00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 228 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3729 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.089 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.089 The recommended git tool is: git 00:00:00.090 using credential 00000000-0000-0000-0000-000000000002 00:00:00.091 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.130 Fetching changes from the remote Git repository 00:00:00.133 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.163 Using shallow fetch with depth 1 00:00:00.163 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.163 > git --version # timeout=10 00:00:00.285 > git --version # 'git version 2.39.2' 00:00:00.285 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.307 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.307 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.330 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.345 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.355 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.355 > git config core.sparsecheckout # timeout=10 00:00:05.364 > git read-tree -mu HEAD # timeout=10 00:00:05.378 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.398 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.398 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.477 [Pipeline] Start of Pipeline 00:00:05.489 [Pipeline] library 00:00:05.491 Loading library shm_lib@master 00:00:05.491 Library shm_lib@master is cached. Copying from home. 00:00:05.509 [Pipeline] node 00:00:05.529 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.534 [Pipeline] { 00:00:05.542 [Pipeline] catchError 00:00:05.543 [Pipeline] { 00:00:05.555 [Pipeline] wrap 00:00:05.563 [Pipeline] { 00:00:05.570 [Pipeline] stage 00:00:05.572 [Pipeline] { (Prologue) 00:00:05.776 [Pipeline] sh 00:00:06.056 + logger -p user.info -t JENKINS-CI 00:00:06.072 [Pipeline] echo 00:00:06.074 Node: WFP4 00:00:06.081 [Pipeline] sh 00:00:06.370 [Pipeline] setCustomBuildProperty 00:00:06.378 [Pipeline] echo 00:00:06.379 Cleanup processes 00:00:06.383 [Pipeline] sh 00:00:06.659 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.659 3052465 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.672 [Pipeline] sh 00:00:06.952 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.952 ++ grep -v 'sudo pgrep' 00:00:06.952 ++ awk '{print $1}' 00:00:06.952 + sudo kill -9 00:00:06.952 + true 00:00:06.969 [Pipeline] cleanWs 00:00:06.979 [WS-CLEANUP] Deleting project workspace... 00:00:06.980 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.985 [WS-CLEANUP] done 00:00:06.989 [Pipeline] setCustomBuildProperty 00:00:07.003 [Pipeline] sh 00:00:07.283 + sudo git config --global --replace-all safe.directory '*' 00:00:07.433 [Pipeline] httpRequest 00:00:07.825 [Pipeline] echo 00:00:07.826 Sorcerer 10.211.164.20 is alive 00:00:07.832 [Pipeline] retry 00:00:07.833 [Pipeline] { 00:00:07.843 [Pipeline] httpRequest 00:00:07.847 HttpMethod: GET 00:00:07.847 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.847 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.860 Response Code: HTTP/1.1 200 OK 00:00:07.860 Success: Status code 200 is in the accepted range: 200,404 00:00:07.861 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.116 [Pipeline] } 00:00:11.132 [Pipeline] // retry 00:00:11.140 [Pipeline] sh 00:00:11.423 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.437 [Pipeline] httpRequest 00:00:11.856 [Pipeline] echo 00:00:11.858 Sorcerer 10.211.164.20 is alive 00:00:11.867 [Pipeline] retry 00:00:11.869 [Pipeline] { 00:00:11.884 [Pipeline] httpRequest 00:00:11.888 HttpMethod: GET 00:00:11.889 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:11.889 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:11.915 Response Code: HTTP/1.1 200 OK 00:00:11.915 Success: Status code 200 is in the accepted range: 200,404 00:00:11.915 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:52.506 [Pipeline] } 00:01:52.524 [Pipeline] // retry 00:01:52.532 [Pipeline] sh 00:01:52.816 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:55.361 [Pipeline] sh 00:01:55.646 + git -C spdk log --oneline -n5 00:01:55.646 b18e1bd62 version: v24.09.1-pre 00:01:55.646 19524ad45 version: v24.09 00:01:55.646 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:01:55.646 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:01:55.646 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:01:55.665 [Pipeline] withCredentials 00:01:55.676 > git --version # timeout=10 00:01:55.690 > git --version # 'git version 2.39.2' 00:01:55.706 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:55.708 [Pipeline] { 00:01:55.718 [Pipeline] retry 00:01:55.720 [Pipeline] { 00:01:55.736 [Pipeline] sh 00:01:56.018 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:56.967 [Pipeline] } 00:01:56.985 [Pipeline] // retry 00:01:56.990 [Pipeline] } 00:01:57.006 [Pipeline] // withCredentials 00:01:57.015 [Pipeline] httpRequest 00:01:57.385 [Pipeline] echo 00:01:57.387 Sorcerer 10.211.164.20 is alive 00:01:57.397 [Pipeline] retry 00:01:57.399 [Pipeline] { 00:01:57.413 [Pipeline] httpRequest 00:01:57.417 HttpMethod: GET 00:01:57.417 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:57.418 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:57.421 Response Code: HTTP/1.1 200 OK 00:01:57.422 Success: Status code 200 is in the accepted range: 200,404 00:01:57.422 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:04.114 [Pipeline] 
} 00:02:04.130 [Pipeline] // retry 00:02:04.138 [Pipeline] sh 00:02:04.422 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:05.810 [Pipeline] sh 00:02:06.090 + git -C dpdk log --oneline -n5 00:02:06.090 eeb0605f11 version: 23.11.0 00:02:06.090 238778122a doc: update release notes for 23.11 00:02:06.090 46aa6b3cfc doc: fix description of RSS features 00:02:06.090 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:06.090 7e421ae345 devtools: support skipping forbid rule check 00:02:06.100 [Pipeline] } 00:02:06.113 [Pipeline] // stage 00:02:06.122 [Pipeline] stage 00:02:06.124 [Pipeline] { (Prepare) 00:02:06.144 [Pipeline] writeFile 00:02:06.160 [Pipeline] sh 00:02:06.456 + logger -p user.info -t JENKINS-CI 00:02:06.468 [Pipeline] sh 00:02:06.751 + logger -p user.info -t JENKINS-CI 00:02:06.763 [Pipeline] sh 00:02:07.048 + cat autorun-spdk.conf 00:02:07.048 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.048 SPDK_TEST_NVMF=1 00:02:07.048 SPDK_TEST_NVME_CLI=1 00:02:07.048 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:07.048 SPDK_TEST_NVMF_NICS=e810 00:02:07.048 SPDK_TEST_VFIOUSER=1 00:02:07.048 SPDK_RUN_UBSAN=1 00:02:07.048 NET_TYPE=phy 00:02:07.048 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:07.048 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:07.056 RUN_NIGHTLY=1 00:02:07.060 [Pipeline] readFile 00:02:07.079 [Pipeline] withEnv 00:02:07.081 [Pipeline] { 00:02:07.092 [Pipeline] sh 00:02:07.379 + set -ex 00:02:07.380 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:07.380 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:07.380 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.380 ++ SPDK_TEST_NVMF=1 00:02:07.380 ++ SPDK_TEST_NVME_CLI=1 00:02:07.380 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:07.380 ++ SPDK_TEST_NVMF_NICS=e810 00:02:07.380 ++ SPDK_TEST_VFIOUSER=1 00:02:07.380 ++ SPDK_RUN_UBSAN=1 00:02:07.380 ++ NET_TYPE=phy 00:02:07.380 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:07.380 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:07.380 ++ RUN_NIGHTLY=1 00:02:07.380 + case $SPDK_TEST_NVMF_NICS in 00:02:07.380 + DRIVERS=ice 00:02:07.380 + [[ tcp == \r\d\m\a ]] 00:02:07.380 + [[ -n ice ]] 00:02:07.380 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:07.380 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:07.380 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:07.380 rmmod: ERROR: Module i40iw is not currently loaded 00:02:07.380 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:07.380 + true 00:02:07.380 + for D in $DRIVERS 00:02:07.380 + sudo modprobe ice 00:02:07.380 + exit 0 00:02:07.389 [Pipeline] } 00:02:07.402 [Pipeline] // withEnv 00:02:07.405 [Pipeline] } 00:02:07.413 [Pipeline] // stage 00:02:07.420 [Pipeline] catchError 00:02:07.421 [Pipeline] { 00:02:07.430 [Pipeline] timeout 00:02:07.430 Timeout set to expire in 1 hr 0 min 00:02:07.431 [Pipeline] { 00:02:07.440 [Pipeline] stage 00:02:07.441 [Pipeline] { (Tests) 00:02:07.450 [Pipeline] sh 00:02:07.732 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:07.732 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:07.732 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:07.732 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:07.732 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.732 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:07.732 + 
[[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:07.732 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:07.732 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:07.732 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:07.732 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:07.732 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:07.732 + source /etc/os-release 00:02:07.732 ++ NAME='Fedora Linux' 00:02:07.732 ++ VERSION='39 (Cloud Edition)' 00:02:07.732 ++ ID=fedora 00:02:07.732 ++ VERSION_ID=39 00:02:07.732 ++ VERSION_CODENAME= 00:02:07.732 ++ PLATFORM_ID=platform:f39 00:02:07.732 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:07.732 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:07.732 ++ LOGO=fedora-logo-icon 00:02:07.732 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:07.732 ++ HOME_URL=https://fedoraproject.org/ 00:02:07.732 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:07.732 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:07.732 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:07.732 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:07.732 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:07.732 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:07.732 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:07.732 ++ SUPPORT_END=2024-11-12 00:02:07.732 ++ VARIANT='Cloud Edition' 00:02:07.732 ++ VARIANT_ID=cloud 00:02:07.732 + uname -a 00:02:07.732 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux 00:02:07.732 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:10.274 Hugepages 00:02:10.274 node hugesize free / total 00:02:10.274 node0 1048576kB 0 / 0 00:02:10.274 node0 2048kB 0 / 0 00:02:10.274 node1 1048576kB 0 / 0 00:02:10.274 node1 2048kB 0 / 0 00:02:10.274 00:02:10.274 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:10.274 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:10.274 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:10.274 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:10.274 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:10.274 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:10.274 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:10.274 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:10.274 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:10.274 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:10.274 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:10.274 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:10.274 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:10.274 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:10.274 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:10.274 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:10.274 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:10.274 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:10.274 + rm -f /tmp/spdk-ld-path 00:02:10.274 + source autorun-spdk.conf 00:02:10.274 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.274 ++ SPDK_TEST_NVMF=1 00:02:10.274 ++ SPDK_TEST_NVME_CLI=1 00:02:10.274 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.274 ++ SPDK_TEST_NVMF_NICS=e810 00:02:10.274 ++ SPDK_TEST_VFIOUSER=1 00:02:10.274 ++ SPDK_RUN_UBSAN=1 00:02:10.274 ++ NET_TYPE=phy 00:02:10.274 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:10.274 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:10.274 ++ RUN_NIGHTLY=1 00:02:10.274 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 
00:02:10.274 + [[ -n '' ]] 00:02:10.274 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:10.274 + for M in /var/spdk/build-*-manifest.txt 00:02:10.274 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:10.275 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:10.275 + for M in /var/spdk/build-*-manifest.txt 00:02:10.275 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:10.275 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:10.275 + for M in /var/spdk/build-*-manifest.txt 00:02:10.275 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:10.275 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:10.275 ++ uname 00:02:10.275 + [[ Linux == \L\i\n\u\x ]] 00:02:10.275 + sudo dmesg -T 00:02:10.275 + sudo dmesg --clear 00:02:10.534 + dmesg_pid=3053940 00:02:10.534 + [[ Fedora Linux == FreeBSD ]] 00:02:10.534 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.534 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.534 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:10.534 + [[ -x /usr/src/fio-static/fio ]] 00:02:10.534 + export FIO_BIN=/usr/src/fio-static/fio 00:02:10.534 + FIO_BIN=/usr/src/fio-static/fio 00:02:10.534 + sudo dmesg -Tw 00:02:10.534 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:10.534 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:10.534 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:10.534 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.534 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.534 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:10.534 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.534 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.534 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:10.534 Test configuration: 00:02:10.534 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.534 SPDK_TEST_NVMF=1 00:02:10.534 SPDK_TEST_NVME_CLI=1 00:02:10.534 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.534 SPDK_TEST_NVMF_NICS=e810 00:02:10.534 SPDK_TEST_VFIOUSER=1 00:02:10.534 SPDK_RUN_UBSAN=1 00:02:10.534 NET_TYPE=phy 00:02:10.534 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:10.534 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:10.534 RUN_NIGHTLY=1 05:30:44 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:10.534 05:30:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:10.534 05:30:44 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:10.535 05:30:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:10.535 05:30:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:10.535 05:30:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:10.535 05:30:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.535 05:30:44 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.535 05:30:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.535 05:30:44 -- paths/export.sh@5 -- $ export PATH 00:02:10.535 05:30:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.535 05:30:44 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:10.535 05:30:44 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:10.535 05:30:44 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1734323444.XXXXXX 00:02:10.535 05:30:44 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1734323444.KyLupd 00:02:10.535 05:30:44 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:10.535 05:30:44 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:02:10.535 05:30:44 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:10.535 05:30:44 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:02:10.535 05:30:44 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:10.535 05:30:44 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:10.535 05:30:44 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:10.535 05:30:44 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:10.535 05:30:44 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.535 05:30:44 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:02:10.535 05:30:44 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:10.535 05:30:44 -- pm/common@17 -- $ local monitor 00:02:10.535 05:30:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.535 05:30:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.535 05:30:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.535 
05:30:44 -- pm/common@21 -- $ date +%s 00:02:10.535 05:30:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.535 05:30:44 -- pm/common@21 -- $ date +%s 00:02:10.535 05:30:44 -- pm/common@25 -- $ sleep 1 00:02:10.535 05:30:44 -- pm/common@21 -- $ date +%s 00:02:10.535 05:30:44 -- pm/common@21 -- $ date +%s 00:02:10.535 05:30:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734323444 00:02:10.535 05:30:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734323444 00:02:10.535 05:30:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734323444 00:02:10.535 05:30:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734323444 00:02:10.535 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734323444_collect-cpu-load.pm.log 00:02:10.535 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734323444_collect-vmstat.pm.log 00:02:10.535 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734323444_collect-cpu-temp.pm.log 00:02:10.535 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734323444_collect-bmc-pm.bmc.pm.log 00:02:11.473 05:30:45 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:11.473 05:30:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:11.473 05:30:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:11.473 05:30:45 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.473 05:30:45 -- spdk/autobuild.sh@16 -- $ date -u 00:02:11.473 Mon Dec 16 04:30:45 AM UTC 2024 00:02:11.473 05:30:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:11.473 v24.09-1-gb18e1bd62 00:02:11.473 05:30:45 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:11.473 05:30:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:11.473 05:30:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:11.473 05:30:45 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:11.473 05:30:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:11.473 05:30:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.473 ************************************ 00:02:11.473 START TEST ubsan 00:02:11.473 ************************************ 00:02:11.473 05:30:45 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:11.473 using ubsan 00:02:11.473 00:02:11.473 real 0m0.000s 00:02:11.473 user 0m0.000s 00:02:11.473 sys 0m0.000s 00:02:11.473 05:30:45 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:11.473 05:30:45 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:11.473 ************************************ 00:02:11.473 END TEST ubsan 00:02:11.473 ************************************ 00:02:11.732 05:30:45 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:11.732 05:30:45 -- 
spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:11.732 05:30:45 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:11.732 05:30:45 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:11.732 05:30:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:11.732 05:30:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.732 ************************************ 00:02:11.732 START TEST build_native_dpdk 00:02:11.732 ************************************ 00:02:11.732 05:30:45 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:02:11.732 eeb0605f11 version: 23.11.0 00:02:11.732 238778122a doc: update release notes for 23.11 00:02:11.732 46aa6b3cfc doc: fix description of RSS features 00:02:11.732 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:11.732 7e421ae345 devtools: support skipping forbid rule check 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:11.732 05:30:45 
build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:11.732 patching file config/rte_config.h 00:02:11.732 Hunk #1 succeeded at 60 (offset 1 line). 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:11.732 patching file lib/pcapng/rte_pcapng.c 00:02:11.732 05:30:45 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:11.732 05:30:45 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:11.733 05:30:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:11.733 05:30:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:11.733 05:30:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:11.733 05:30:45 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:11.733 05:30:45 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:11.733 05:30:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:11.733 05:30:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:11.733 05:30:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:11.733 05:30:45 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:11.733 05:30:45 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:11.733 05:30:45 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:11.733 05:30:45 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:11.733 05:30:45 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:11.733 05:30:45 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:11.733 05:30:45 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:11.733 05:30:45 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:11.733 05:30:45 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:17.009 The Meson build system 00:02:17.009 Version: 1.5.0 00:02:17.009 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:17.009 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:17.009 Build type: native build 00:02:17.009 Program cat found: YES (/usr/bin/cat) 00:02:17.009 Project name: DPDK 00:02:17.009 Project version: 23.11.0 00:02:17.009 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:17.009 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:17.009 Host machine cpu family: x86_64 00:02:17.009 Host machine cpu: x86_64 00:02:17.009 Message: ## Building in Developer Mode ## 00:02:17.009 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:17.009 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:17.009 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:17.009 Program python3 found: YES (/usr/bin/python3) 00:02:17.009 Program cat found: YES (/usr/bin/cat) 00:02:17.009 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:17.009 Compiler for C supports arguments -march=native: YES 00:02:17.009 Checking for size of "void *" : 8 00:02:17.009 Checking for size of "void *" : 8 (cached) 00:02:17.009 Library m found: YES 00:02:17.009 Library numa found: YES 00:02:17.009 Has header "numaif.h" : YES 00:02:17.009 Library fdt found: NO 00:02:17.009 Library execinfo found: NO 00:02:17.009 Has header "execinfo.h" : YES 00:02:17.009 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:17.009 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:17.009 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:17.009 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:17.009 Run-time dependency openssl found: YES 3.1.1 00:02:17.009 Run-time dependency libpcap found: YES 1.10.4 00:02:17.009 Has header "pcap.h" with dependency libpcap: YES 00:02:17.009 Compiler for C supports arguments -Wcast-qual: YES 00:02:17.009 Compiler for C supports arguments -Wdeprecated: YES 00:02:17.009 Compiler for C supports arguments -Wformat: YES 00:02:17.009 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:17.009 Compiler for C supports arguments -Wformat-security: NO 00:02:17.009 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:17.009 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:17.009 Compiler for C supports arguments -Wnested-externs: YES 00:02:17.009 Compiler for C supports arguments -Wold-style-definition: YES 00:02:17.009 Compiler for C supports arguments -Wpointer-arith: YES 00:02:17.009 Compiler for C supports arguments -Wsign-compare: YES 00:02:17.009 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:17.009 Compiler for C supports arguments -Wundef: YES 00:02:17.009 Compiler for C supports arguments -Wwrite-strings: YES 00:02:17.009 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:17.009 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:17.010 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:17.010 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:17.010 Program objdump found: YES (/usr/bin/objdump) 00:02:17.010 Compiler for C supports arguments -mavx512f: YES 00:02:17.010 Checking if "AVX512 checking" compiles: YES 00:02:17.010 Fetching value of define "__SSE4_2__" : 1 00:02:17.010 Fetching value of define "__AES__" : 1 00:02:17.010 Fetching value of define "__AVX__" : 1 00:02:17.010 Fetching value of define "__AVX2__" : 1 00:02:17.010 Fetching value of define "__AVX512BW__" : 1 00:02:17.010 Fetching value of define "__AVX512CD__" : 1 00:02:17.010 Fetching value of define "__AVX512DQ__" : 1 00:02:17.010 Fetching value of define "__AVX512F__" : 1 00:02:17.010 Fetching value of define "__AVX512VL__" : 1 00:02:17.010 Fetching value of define "__PCLMUL__" : 1 00:02:17.010 Fetching value of define "__RDRND__" : 1 00:02:17.010 Fetching value of define "__RDSEED__" : 1 00:02:17.010 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:17.010 Fetching value of define "__znver1__" : (undefined) 00:02:17.010 Fetching value of define "__znver2__" : (undefined) 00:02:17.010 Fetching value of define "__znver3__" : (undefined) 00:02:17.010 Fetching value of define "__znver4__" : (undefined) 00:02:17.010 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:17.010 Message: lib/log: Defining dependency "log" 00:02:17.010 Message: lib/kvargs: Defining dependency "kvargs" 00:02:17.010 Message: lib/telemetry: Defining dependency 
"telemetry" 00:02:17.010 Checking for function "getentropy" : NO 00:02:17.010 Message: lib/eal: Defining dependency "eal" 00:02:17.010 Message: lib/ring: Defining dependency "ring" 00:02:17.010 Message: lib/rcu: Defining dependency "rcu" 00:02:17.010 Message: lib/mempool: Defining dependency "mempool" 00:02:17.010 Message: lib/mbuf: Defining dependency "mbuf" 00:02:17.010 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:17.010 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:17.010 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:17.010 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:17.010 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:17.010 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:17.010 Compiler for C supports arguments -mpclmul: YES 00:02:17.010 Compiler for C supports arguments -maes: YES 00:02:17.010 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:17.010 Compiler for C supports arguments -mavx512bw: YES 00:02:17.010 Compiler for C supports arguments -mavx512dq: YES 00:02:17.010 Compiler for C supports arguments -mavx512vl: YES 00:02:17.010 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:17.010 Compiler for C supports arguments -mavx2: YES 00:02:17.010 Compiler for C supports arguments -mavx: YES 00:02:17.010 Message: lib/net: Defining dependency "net" 00:02:17.010 Message: lib/meter: Defining dependency "meter" 00:02:17.010 Message: lib/ethdev: Defining dependency "ethdev" 00:02:17.010 Message: lib/pci: Defining dependency "pci" 00:02:17.010 Message: lib/cmdline: Defining dependency "cmdline" 00:02:17.010 Message: lib/metrics: Defining dependency "metrics" 00:02:17.010 Message: lib/hash: Defining dependency "hash" 00:02:17.010 Message: lib/timer: Defining dependency "timer" 00:02:17.010 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:17.010 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:17.010 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:17.010 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:17.010 Message: lib/acl: Defining dependency "acl" 00:02:17.010 Message: lib/bbdev: Defining dependency "bbdev" 00:02:17.010 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:17.010 Run-time dependency libelf found: YES 0.191 00:02:17.010 Message: lib/bpf: Defining dependency "bpf" 00:02:17.010 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:17.010 Message: lib/compressdev: Defining dependency "compressdev" 00:02:17.010 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:17.010 Message: lib/distributor: Defining dependency "distributor" 00:02:17.010 Message: lib/dmadev: Defining dependency "dmadev" 00:02:17.010 Message: lib/efd: Defining dependency "efd" 00:02:17.010 Message: lib/eventdev: Defining dependency "eventdev" 00:02:17.010 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:17.010 Message: lib/gpudev: Defining dependency "gpudev" 00:02:17.010 Message: lib/gro: Defining dependency "gro" 00:02:17.010 Message: lib/gso: Defining dependency "gso" 00:02:17.010 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:17.010 Message: lib/jobstats: Defining dependency "jobstats" 00:02:17.010 Message: lib/latencystats: Defining dependency "latencystats" 00:02:17.010 Message: lib/lpm: Defining dependency "lpm" 00:02:17.010 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:17.010 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:17.010 Fetching value of define "__AVX512IFMA__" : 
(undefined) 00:02:17.010 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:17.010 Message: lib/member: Defining dependency "member" 00:02:17.010 Message: lib/pcapng: Defining dependency "pcapng" 00:02:17.010 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:17.010 Message: lib/power: Defining dependency "power" 00:02:17.010 Message: lib/rawdev: Defining dependency "rawdev" 00:02:17.010 Message: lib/regexdev: Defining dependency "regexdev" 00:02:17.010 Message: lib/mldev: Defining dependency "mldev" 00:02:17.010 Message: lib/rib: Defining dependency "rib" 00:02:17.010 Message: lib/reorder: Defining dependency "reorder" 00:02:17.010 Message: lib/sched: Defining dependency "sched" 00:02:17.010 Message: lib/security: Defining dependency "security" 00:02:17.010 Message: lib/stack: Defining dependency "stack" 00:02:17.010 Has header "linux/userfaultfd.h" : YES 00:02:17.010 Has header "linux/vduse.h" : YES 00:02:17.010 Message: lib/vhost: Defining dependency "vhost" 00:02:17.010 Message: lib/ipsec: Defining dependency "ipsec" 00:02:17.010 Message: lib/pdcp: Defining dependency "pdcp" 00:02:17.010 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:17.010 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:17.010 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:17.010 Message: lib/fib: Defining dependency "fib" 00:02:17.010 Message: lib/port: Defining dependency "port" 00:02:17.010 Message: lib/pdump: Defining dependency "pdump" 00:02:17.010 Message: lib/table: Defining dependency "table" 00:02:17.010 Message: lib/pipeline: Defining dependency "pipeline" 00:02:17.010 Message: lib/graph: Defining dependency "graph" 00:02:17.010 Message: lib/node: Defining dependency "node" 00:02:17.010 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:17.579 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:17.579 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:17.579 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:17.579 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:17.579 Compiler for C supports arguments -Wno-unused-value: YES 00:02:17.579 Compiler for C supports arguments -Wno-format: YES 00:02:17.579 Compiler for C supports arguments -Wno-format-security: YES 00:02:17.579 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:17.579 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:17.579 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:17.579 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:17.579 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:17.579 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:17.579 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:17.579 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:17.579 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:17.579 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:17.579 Has header "sys/epoll.h" : YES 00:02:17.579 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:17.579 Configuring doxy-api-html.conf using configuration 00:02:17.579 Configuring doxy-api-man.conf using configuration 00:02:17.579 Program mandb found: YES (/usr/bin/mandb) 00:02:17.579 Program sphinx-build found: NO 00:02:17.579 Configuring rte_build_config.h using configuration 00:02:17.579 Message: 00:02:17.579 ================= 00:02:17.579 Applications Enabled 
00:02:17.579 ================= 00:02:17.579 00:02:17.579 apps: 00:02:17.579 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:17.579 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:17.579 test-pmd, test-regex, test-sad, test-security-perf, 00:02:17.579 00:02:17.579 Message: 00:02:17.579 ================= 00:02:17.579 Libraries Enabled 00:02:17.579 ================= 00:02:17.579 00:02:17.579 libs: 00:02:17.579 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:17.579 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:17.579 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:17.579 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:17.579 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:17.580 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:17.580 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:17.580 00:02:17.580 00:02:17.580 Message: 00:02:17.580 =============== 00:02:17.580 Drivers Enabled 00:02:17.580 =============== 00:02:17.580 00:02:17.580 common: 00:02:17.580 00:02:17.580 bus: 00:02:17.580 pci, vdev, 00:02:17.580 mempool: 00:02:17.580 ring, 00:02:17.580 dma: 00:02:17.580 00:02:17.580 net: 00:02:17.580 i40e, 00:02:17.580 raw: 00:02:17.580 00:02:17.580 crypto: 00:02:17.580 00:02:17.580 compress: 00:02:17.580 00:02:17.580 regex: 00:02:17.580 00:02:17.580 ml: 00:02:17.580 00:02:17.580 vdpa: 00:02:17.580 00:02:17.580 event: 00:02:17.580 00:02:17.580 baseband: 00:02:17.580 00:02:17.580 gpu: 00:02:17.580 00:02:17.580 00:02:17.580 Message: 00:02:17.580 ================= 00:02:17.580 Content Skipped 00:02:17.580 ================= 00:02:17.580 00:02:17.580 apps: 00:02:17.580 00:02:17.580 libs: 00:02:17.580 00:02:17.580 drivers: 00:02:17.580 common/cpt: not in enabled drivers build config 00:02:17.580 common/dpaax: not in enabled drivers build config 00:02:17.580 common/iavf: not in enabled drivers build config 00:02:17.580 common/idpf: not in enabled drivers build config 00:02:17.580 common/mvep: not in enabled drivers build config 00:02:17.580 common/octeontx: not in enabled drivers build config 00:02:17.580 bus/auxiliary: not in enabled drivers build config 00:02:17.580 bus/cdx: not in enabled drivers build config 00:02:17.580 bus/dpaa: not in enabled drivers build config 00:02:17.580 bus/fslmc: not in enabled drivers build config 00:02:17.580 bus/ifpga: not in enabled drivers build config 00:02:17.580 bus/platform: not in enabled drivers build config 00:02:17.580 bus/vmbus: not in enabled drivers build config 00:02:17.580 common/cnxk: not in enabled drivers build config 00:02:17.580 common/mlx5: not in enabled drivers build config 00:02:17.580 common/nfp: not in enabled drivers build config 00:02:17.580 common/qat: not in enabled drivers build config 00:02:17.580 common/sfc_efx: not in enabled drivers build config 00:02:17.580 mempool/bucket: not in enabled drivers build config 00:02:17.580 mempool/cnxk: not in enabled drivers build config 00:02:17.580 mempool/dpaa: not in enabled drivers build config 00:02:17.580 mempool/dpaa2: not in enabled drivers build config 00:02:17.580 mempool/octeontx: not in enabled drivers build config 00:02:17.580 mempool/stack: not in enabled drivers build config 00:02:17.580 dma/cnxk: not in enabled drivers build config 00:02:17.580 dma/dpaa: not in enabled drivers build config 00:02:17.580 dma/dpaa2: not in enabled 
drivers build config 00:02:17.580 dma/hisilicon: not in enabled drivers build config 00:02:17.580 dma/idxd: not in enabled drivers build config 00:02:17.580 dma/ioat: not in enabled drivers build config 00:02:17.580 dma/skeleton: not in enabled drivers build config 00:02:17.580 net/af_packet: not in enabled drivers build config 00:02:17.580 net/af_xdp: not in enabled drivers build config 00:02:17.580 net/ark: not in enabled drivers build config 00:02:17.580 net/atlantic: not in enabled drivers build config 00:02:17.580 net/avp: not in enabled drivers build config 00:02:17.580 net/axgbe: not in enabled drivers build config 00:02:17.580 net/bnx2x: not in enabled drivers build config 00:02:17.580 net/bnxt: not in enabled drivers build config 00:02:17.580 net/bonding: not in enabled drivers build config 00:02:17.580 net/cnxk: not in enabled drivers build config 00:02:17.580 net/cpfl: not in enabled drivers build config 00:02:17.580 net/cxgbe: not in enabled drivers build config 00:02:17.580 net/dpaa: not in enabled drivers build config 00:02:17.580 net/dpaa2: not in enabled drivers build config 00:02:17.580 net/e1000: not in enabled drivers build config 00:02:17.580 net/ena: not in enabled drivers build config 00:02:17.580 net/enetc: not in enabled drivers build config 00:02:17.580 net/enetfec: not in enabled drivers build config 00:02:17.580 net/enic: not in enabled drivers build config 00:02:17.580 net/failsafe: not in enabled drivers build config 00:02:17.580 net/fm10k: not in enabled drivers build config 00:02:17.580 net/gve: not in enabled drivers build config 00:02:17.580 net/hinic: not in enabled drivers build config 00:02:17.580 net/hns3: not in enabled drivers build config 00:02:17.580 net/iavf: not in enabled drivers build config 00:02:17.580 net/ice: not in enabled drivers build config 00:02:17.580 net/idpf: not in enabled drivers build config 00:02:17.580 net/igc: not in enabled drivers build config 00:02:17.580 net/ionic: not in enabled drivers build config 00:02:17.580 net/ipn3ke: not in enabled drivers build config 00:02:17.580 net/ixgbe: not in enabled drivers build config 00:02:17.580 net/mana: not in enabled drivers build config 00:02:17.580 net/memif: not in enabled drivers build config 00:02:17.580 net/mlx4: not in enabled drivers build config 00:02:17.580 net/mlx5: not in enabled drivers build config 00:02:17.580 net/mvneta: not in enabled drivers build config 00:02:17.580 net/mvpp2: not in enabled drivers build config 00:02:17.580 net/netvsc: not in enabled drivers build config 00:02:17.580 net/nfb: not in enabled drivers build config 00:02:17.580 net/nfp: not in enabled drivers build config 00:02:17.580 net/ngbe: not in enabled drivers build config 00:02:17.580 net/null: not in enabled drivers build config 00:02:17.580 net/octeontx: not in enabled drivers build config 00:02:17.580 net/octeon_ep: not in enabled drivers build config 00:02:17.580 net/pcap: not in enabled drivers build config 00:02:17.580 net/pfe: not in enabled drivers build config 00:02:17.580 net/qede: not in enabled drivers build config 00:02:17.580 net/ring: not in enabled drivers build config 00:02:17.580 net/sfc: not in enabled drivers build config 00:02:17.580 net/softnic: not in enabled drivers build config 00:02:17.580 net/tap: not in enabled drivers build config 00:02:17.580 net/thunderx: not in enabled drivers build config 00:02:17.580 net/txgbe: not in enabled drivers build config 00:02:17.580 net/vdev_netvsc: not in enabled drivers build config 00:02:17.580 net/vhost: not in enabled drivers 
build config 00:02:17.580 net/virtio: not in enabled drivers build config 00:02:17.580 net/vmxnet3: not in enabled drivers build config 00:02:17.580 raw/cnxk_bphy: not in enabled drivers build config 00:02:17.580 raw/cnxk_gpio: not in enabled drivers build config 00:02:17.580 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:17.580 raw/ifpga: not in enabled drivers build config 00:02:17.580 raw/ntb: not in enabled drivers build config 00:02:17.580 raw/skeleton: not in enabled drivers build config 00:02:17.580 crypto/armv8: not in enabled drivers build config 00:02:17.580 crypto/bcmfs: not in enabled drivers build config 00:02:17.580 crypto/caam_jr: not in enabled drivers build config 00:02:17.580 crypto/ccp: not in enabled drivers build config 00:02:17.580 crypto/cnxk: not in enabled drivers build config 00:02:17.580 crypto/dpaa_sec: not in enabled drivers build config 00:02:17.580 crypto/dpaa2_sec: not in enabled drivers build config 00:02:17.580 crypto/ipsec_mb: not in enabled drivers build config 00:02:17.580 crypto/mlx5: not in enabled drivers build config 00:02:17.580 crypto/mvsam: not in enabled drivers build config 00:02:17.580 crypto/nitrox: not in enabled drivers build config 00:02:17.580 crypto/null: not in enabled drivers build config 00:02:17.580 crypto/octeontx: not in enabled drivers build config 00:02:17.580 crypto/openssl: not in enabled drivers build config 00:02:17.580 crypto/scheduler: not in enabled drivers build config 00:02:17.580 crypto/uadk: not in enabled drivers build config 00:02:17.580 crypto/virtio: not in enabled drivers build config 00:02:17.580 compress/isal: not in enabled drivers build config 00:02:17.580 compress/mlx5: not in enabled drivers build config 00:02:17.580 compress/octeontx: not in enabled drivers build config 00:02:17.580 compress/zlib: not in enabled drivers build config 00:02:17.580 regex/mlx5: not in enabled drivers build config 00:02:17.580 regex/cn9k: not in enabled drivers build config 00:02:17.580 ml/cnxk: not in enabled drivers build config 00:02:17.580 vdpa/ifc: not in enabled drivers build config 00:02:17.580 vdpa/mlx5: not in enabled drivers build config 00:02:17.580 vdpa/nfp: not in enabled drivers build config 00:02:17.580 vdpa/sfc: not in enabled drivers build config 00:02:17.580 event/cnxk: not in enabled drivers build config 00:02:17.580 event/dlb2: not in enabled drivers build config 00:02:17.580 event/dpaa: not in enabled drivers build config 00:02:17.580 event/dpaa2: not in enabled drivers build config 00:02:17.580 event/dsw: not in enabled drivers build config 00:02:17.580 event/opdl: not in enabled drivers build config 00:02:17.580 event/skeleton: not in enabled drivers build config 00:02:17.580 event/sw: not in enabled drivers build config 00:02:17.580 event/octeontx: not in enabled drivers build config 00:02:17.580 baseband/acc: not in enabled drivers build config 00:02:17.580 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:17.580 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:17.580 baseband/la12xx: not in enabled drivers build config 00:02:17.580 baseband/null: not in enabled drivers build config 00:02:17.580 baseband/turbo_sw: not in enabled drivers build config 00:02:17.580 gpu/cuda: not in enabled drivers build config 00:02:17.580 00:02:17.580 00:02:17.580 Build targets in project: 217 00:02:17.580 00:02:17.580 DPDK 23.11.0 00:02:17.580 00:02:17.580 User defined options 00:02:17.580 libdir : lib 00:02:17.580 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 
00:02:17.580 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:17.580 c_link_args : 00:02:17.580 enable_docs : false 00:02:17.580 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:17.580 enable_kmods : false 00:02:17.580 machine : native 00:02:17.580 tests : false 00:02:17.580 00:02:17.580 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:17.580 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:17.580 05:30:51 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 00:02:17.846 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:17.846 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:17.846 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:17.846 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:17.846 [4/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:17.846 [5/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:17.846 [6/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:17.846 [7/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:17.846 [8/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:17.846 [9/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:17.846 [10/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:17.846 [11/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:17.846 [12/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:17.846 [13/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:17.846 [14/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:18.111 [15/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:18.111 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:18.111 [17/707] Linking static target lib/librte_kvargs.a 00:02:18.111 [18/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:18.111 [19/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:18.111 [20/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:18.111 [21/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:18.111 [22/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:18.111 [23/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:18.111 [24/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:18.111 [25/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:18.111 [26/707] Linking static target lib/librte_pci.a 00:02:18.111 [27/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:18.111 [28/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:18.111 [29/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:18.111 [30/707] Linking static target lib/librte_log.a 00:02:18.111 [31/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:18.111 [32/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 
00:02:18.111 [33/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:18.111 [34/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:18.380 [35/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:18.380 [36/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:18.380 [37/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:18.380 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:18.380 [39/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.380 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:18.380 [41/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:18.380 [42/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:18.380 [43/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:18.643 [44/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:18.643 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:18.643 [46/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.643 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:18.643 [48/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:18.643 [49/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:18.643 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:18.643 [51/707] Linking static target lib/librte_meter.a 00:02:18.643 [52/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:18.643 [53/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:18.643 [54/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:18.643 [55/707] Linking static target lib/librte_ring.a 00:02:18.643 [56/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:18.643 [57/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:18.643 [58/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:18.643 [59/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:18.643 [60/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:18.643 [61/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:18.643 [62/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:18.643 [63/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:18.643 [64/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:18.643 [65/707] Linking static target lib/librte_cmdline.a 00:02:18.643 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:18.643 [67/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:18.643 [68/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:18.643 [69/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:18.643 [70/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:18.643 [71/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:18.643 [72/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:18.643 [73/707] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:18.643 [74/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:18.643 [75/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:18.643 [76/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:18.643 [77/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:18.909 [78/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:18.909 [79/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:18.909 [80/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:18.909 [81/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:18.909 [82/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:18.909 [83/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:18.909 [84/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:18.909 [85/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:18.909 [86/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:18.909 [87/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:18.909 [88/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:18.909 [89/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:18.909 [90/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:18.909 [91/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:18.909 [92/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:18.909 [93/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:18.909 [94/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:18.909 [95/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:18.909 [96/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:18.909 [97/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:18.909 [98/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:18.909 [99/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:18.909 [100/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:18.909 [101/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:18.909 [102/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.909 [103/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:18.909 [104/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.909 [105/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:18.909 [106/707] Linking static target lib/librte_net.a 00:02:18.909 [107/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:18.909 [108/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:18.909 [109/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:18.909 [110/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:18.909 [111/707] Linking static target lib/librte_metrics.a 00:02:18.909 [112/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:19.173 [113/707] Compiling C 
object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:19.173 [114/707] Linking target lib/librte_log.so.24.0 00:02:19.173 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:19.173 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:19.173 [117/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:19.173 [118/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.173 [119/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:19.173 [120/707] Linking static target lib/librte_cfgfile.a 00:02:19.173 [121/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:19.173 [122/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:19.173 [123/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:19.173 [124/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:19.173 [125/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:19.173 [126/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:19.173 [127/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:19.173 [128/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:19.173 [129/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:19.173 [130/707] Linking static target lib/librte_mempool.a 00:02:19.173 [131/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:19.173 [132/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:19.173 [133/707] Linking static target lib/librte_bitratestats.a 00:02:19.173 [134/707] Linking target lib/librte_kvargs.so.24.0 00:02:19.440 [135/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:19.440 [136/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:19.441 [137/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:19.441 [138/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:19.441 [139/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:19.441 [140/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:19.441 [141/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:19.441 [142/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:19.441 [143/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:19.441 [144/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:19.441 [145/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:19.441 [146/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:19.441 [147/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:19.441 [148/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.441 [149/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:19.441 [150/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:19.441 [151/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:19.441 [152/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:19.441 [153/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:19.441 [154/707] 
Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:19.441 [155/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:19.441 [156/707] Linking static target lib/librte_timer.a 00:02:19.703 [157/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:19.703 [158/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:19.703 [159/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:19.703 [160/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:19.703 [161/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:19.703 [162/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.703 [163/707] Linking static target lib/librte_compressdev.a 00:02:19.703 [164/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:19.703 [165/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:19.703 [166/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:19.703 [167/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:19.703 [168/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:19.703 [169/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:19.703 [170/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:19.703 [171/707] Linking static target lib/librte_jobstats.a 00:02:19.703 [172/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:19.703 [173/707] Linking static target lib/librte_telemetry.a 00:02:19.703 [174/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:19.703 [175/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.703 [176/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.703 [177/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:19.703 [178/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:19.703 [179/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:19.703 [180/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:19.703 [181/707] Linking static target lib/librte_bbdev.a 00:02:19.703 [182/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:19.969 [183/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:19.969 [184/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:19.969 [185/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:19.969 [186/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:19.969 [187/707] Linking static target lib/librte_dispatcher.a 00:02:19.969 [188/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:19.969 [189/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:19.969 [190/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:19.969 [191/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:19.969 [192/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:19.969 [193/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:19.969 [194/707] Linking static target lib/librte_latencystats.a 
00:02:19.969 [195/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:19.969 [196/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:19.969 [197/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:19.969 [198/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:19.969 [199/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:19.969 [200/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:19.969 [201/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:19.969 [202/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:19.969 [203/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:20.233 [204/707] Linking static target lib/librte_rcu.a 00:02:20.233 [205/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:20.233 [206/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:20.233 [207/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.233 [208/707] Linking static target lib/librte_gpudev.a 00:02:20.233 [209/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:20.233 [210/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:20.233 [211/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:20.233 [212/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:20.233 [213/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.233 [214/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:20.233 [215/707] Linking static target lib/librte_gro.a 00:02:20.233 [216/707] Linking static target lib/librte_dmadev.a 00:02:20.233 [217/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:20.233 [218/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:20.233 [219/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:20.233 [220/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:20.233 [221/707] Linking static target lib/librte_distributor.a 00:02:20.233 [222/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:20.233 [223/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:20.233 [224/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:20.233 [225/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.233 [226/707] Linking static target lib/librte_ip_frag.a 00:02:20.233 [227/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:20.233 [228/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:20.233 [229/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:20.233 [230/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:20.233 [231/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:20.233 [232/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:20.233 [233/707] Linking static target lib/librte_gso.a 00:02:20.233 [234/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:20.233 [235/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:20.233 [236/707] Linking 
static target lib/librte_eal.a 00:02:20.233 [237/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:20.233 [238/707] Linking static target lib/librte_regexdev.a 00:02:20.233 [239/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.233 [240/707] Linking static target lib/librte_stack.a 00:02:20.233 [241/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.233 [242/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:20.531 [243/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:20.531 [244/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:20.531 [245/707] Linking static target lib/librte_mldev.a 00:02:20.531 [246/707] Linking static target lib/librte_mbuf.a 00:02:20.531 [247/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.531 [248/707] Linking target lib/librte_telemetry.so.24.0 00:02:20.531 [249/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:20.531 [250/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:20.531 [251/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:20.531 [252/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.531 [253/707] Linking static target lib/librte_bpf.a 00:02:20.531 [254/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:20.531 [255/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:20.531 [256/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:20.531 [257/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:20.531 [258/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:20.531 [259/707] Linking static target lib/librte_rawdev.a 00:02:20.531 [260/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:20.531 [261/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:20.531 [262/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.531 [263/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:20.531 [264/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:20.531 [265/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.531 [266/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.531 [267/707] Linking static target lib/librte_pcapng.a 00:02:20.811 [268/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.811 [269/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:20.811 [270/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:20.811 [271/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.811 [272/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:20.811 [273/707] Linking static target lib/librte_power.a 00:02:20.811 [274/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:20.811 [275/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.811 [276/707] Linking static target lib/librte_security.a 00:02:20.811 [277/707] 
Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:20.811 [278/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:20.811 [279/707] Linking static target lib/librte_reorder.a 00:02:20.811 [280/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:20.811 [281/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:20.811 [282/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:20.811 [283/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:20.811 [284/707] Linking static target lib/librte_lpm.a 00:02:20.812 [285/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:20.812 [286/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:20.812 [287/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:20.812 [288/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.812 [289/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.812 [290/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:20.812 [291/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.812 [292/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:20.812 [293/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:21.125 [294/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:21.125 [295/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:21.125 [296/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:21.125 [297/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:21.125 [298/707] Linking static target lib/librte_rib.a 00:02:21.125 [299/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:21.125 [300/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:21.125 [301/707] Linking static target lib/librte_efd.a 00:02:21.125 [302/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:21.125 [303/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.125 [304/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:21.125 [305/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:21.125 [306/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:21.125 [307/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:21.125 [308/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.125 [309/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:21.399 [310/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.399 [311/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:21.399 [312/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:21.399 [313/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:21.399 [314/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:21.399 [315/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:21.399 [316/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:21.399 [317/707] Generating 
lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.399 [318/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:21.399 [319/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:21.399 [320/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:21.399 [321/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:21.399 [322/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:21.399 [323/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:21.399 [324/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:21.399 [325/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.399 [326/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.399 [327/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:21.399 [328/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:21.399 [329/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:21.399 [330/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.399 [331/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.399 [332/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:21.662 [333/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:21.662 [334/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:21.662 [335/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:21.662 [336/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.662 [337/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:21.662 [338/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.662 [339/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:21.662 [340/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:21.662 [341/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:21.662 [342/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:21.662 [343/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:21.662 [344/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:21.662 [345/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:21.662 [346/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:21.662 [347/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:21.662 [348/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:21.662 [349/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.662 [350/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:21.662 [351/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:21.662 [352/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:21.931 [353/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:21.931 [354/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:21.931 [355/707] Linking static target lib/librte_fib.a 00:02:21.931 [356/707] Compiling C 
object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:21.931 [357/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:21.931 [358/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:21.931 [359/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:21.931 [360/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:21.931 [361/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.931 [362/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:21.931 [363/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:21.931 [364/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:21.931 [365/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:21.931 [366/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:21.931 [367/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:21.931 [368/707] Linking static target lib/librte_pdump.a 00:02:22.191 [369/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:22.191 [370/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:22.191 [371/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:22.191 [372/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:22.191 [373/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:22.191 [374/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:22.191 [375/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:22.191 [376/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:22.191 [377/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:22.191 [378/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:22.191 [379/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:22.191 [380/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:22.457 [381/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:22.457 [382/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:22.457 [383/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:22.457 [384/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:22.457 [385/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:22.457 [386/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:22.457 [387/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:22.457 [388/707] Linking static target lib/librte_graph.a 00:02:22.457 [389/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:22.457 [390/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:22.457 [391/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:22.457 [392/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:22.457 [393/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:22.457 [394/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:22.457 [395/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:22.457 [396/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:22.457 [397/707] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:22.457 [398/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:22.457 [399/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:22.457 [400/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.727 [401/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.727 [402/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:22.727 [403/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:22.727 [404/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:22.727 [405/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:22.727 [406/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:22.727 [407/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:22.727 [408/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:22.727 [409/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.727 [410/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:22.727 [411/707] Linking static target drivers/librte_bus_vdev.a 00:02:22.727 [412/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:22.727 [413/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.727 [414/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:22.727 [415/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:22.727 [416/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:22.727 [417/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:22.727 [418/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:22.727 [419/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:22.727 [420/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.989 [421/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:22.989 [422/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:22.989 [423/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:22.989 [424/707] Linking static target lib/librte_hash.a 00:02:22.989 [425/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:22.989 [426/707] Linking static target lib/librte_cryptodev.a 00:02:22.989 [427/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:22.989 [428/707] Linking static target lib/librte_sched.a 00:02:22.989 [429/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:22.989 [430/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:22.989 [431/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:22.989 [432/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:22.989 [433/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:22.989 [434/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:22.989 [435/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:22.989 [436/707] Compiling C object 
app/dpdk-test-acl.p/test-acl_main.c.o 00:02:22.989 [437/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:22.989 [438/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:22.989 [439/707] Linking static target drivers/librte_bus_pci.a 00:02:22.989 [440/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:22.989 [441/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:22.989 [442/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:22.989 [443/707] Linking static target lib/librte_table.a 00:02:22.989 [444/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:22.989 [445/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:22.989 [446/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:22.989 [447/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:22.989 [448/707] Linking static target lib/librte_ipsec.a 00:02:23.254 [449/707] Linking static target lib/librte_node.a 00:02:23.254 [450/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:23.254 [451/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:23.254 [452/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:23.254 [453/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:23.254 [454/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.254 [455/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:23.254 [456/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:23.254 [457/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:23.254 [458/707] Linking static target lib/librte_pdcp.a 00:02:23.254 [459/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:23.518 [460/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:23.518 [461/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:23.518 [462/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:23.518 [463/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:23.519 [464/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:23.519 [465/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:23.519 [466/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:23.519 [467/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:23.519 [468/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:23.519 [469/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:23.519 [470/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.519 [471/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:23.519 [472/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:23.519 [473/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:23.519 [474/707] Compiling C object 
app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:23.519 [475/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:23.519 [476/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:23.519 [477/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:23.519 [478/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:23.519 [479/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.519 [480/707] Linking static target lib/librte_port.a 00:02:23.779 [481/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:23.779 [482/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.779 [483/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:23.779 [484/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:23.779 [485/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:23.779 [486/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:23.779 [487/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:23.779 [488/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.779 [489/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:23.779 [490/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.779 [491/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.779 [492/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:23.779 [493/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:23.779 [494/707] Linking static target drivers/librte_mempool_ring.a 00:02:23.779 [495/707] Linking static target lib/librte_member.a 00:02:23.779 [496/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:23.779 [497/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:23.779 [498/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:23.779 [499/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:23.779 [500/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:23.779 [501/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:23.779 [502/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.779 [503/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:23.779 [504/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:23.779 [505/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.779 [506/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.038 [507/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:24.038 [508/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:24.038 [509/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:24.038 [510/707] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:24.038 [511/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:24.038 [512/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:24.038 [513/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:24.038 [514/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:24.038 [515/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:24.038 [516/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:24.038 [517/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:24.038 [518/707] Linking static target lib/librte_eventdev.a 00:02:24.038 [519/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:24.038 [520/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:24.038 [521/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:24.038 [522/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.038 [523/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:24.038 [524/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:24.298 [525/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:24.298 [526/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.298 [527/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:24.298 [528/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:24.298 [529/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:24.298 [530/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:24.298 [531/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:24.298 [532/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:24.298 [533/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:24.298 [534/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:24.298 [535/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:24.298 [536/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:24.298 [537/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:24.298 [538/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:24.298 [539/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:24.298 [540/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:24.298 [541/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:24.298 [542/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:24.298 [543/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:24.298 [544/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.298 [545/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:24.298 [546/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:24.298 [547/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:24.558 [548/707] Compiling 
C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:24.558 [549/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:24.558 [550/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:24.558 [551/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:24.558 [552/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:24.558 [553/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:24.558 [554/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:24.558 [555/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:24.558 [556/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:24.558 [557/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:24.558 [558/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:24.558 [559/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:24.558 [560/707] Linking static target lib/librte_ethdev.a 00:02:24.558 [561/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:24.558 [562/707] Linking static target lib/acl/libavx2_tmp.a 00:02:24.558 [563/707] Linking static target lib/librte_acl.a 00:02:24.817 [564/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:24.817 [565/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:24.817 [566/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.817 [567/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:24.817 [568/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:24.817 [569/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:24.817 [570/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:25.076 [571/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:25.076 [572/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.335 [573/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:25.335 [574/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:25.594 [575/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:25.594 [576/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:25.853 [577/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:26.111 [578/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:26.111 [579/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:26.370 [580/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:26.629 [581/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:26.629 [582/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:26.888 [583/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.888 [584/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:27.146 [585/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:27.146 [586/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:27.146 [587/707] Compiling C object 
drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:27.146 [588/707] Linking static target drivers/librte_net_i40e.a 00:02:28.081 [589/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:28.081 [590/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.647 [591/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:28.906 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:30.284 [593/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.284 [594/707] Linking target lib/librte_eal.so.24.0 00:02:30.544 [595/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:30.544 [596/707] Linking target lib/librte_meter.so.24.0 00:02:30.544 [597/707] Linking target lib/librte_timer.so.24.0 00:02:30.544 [598/707] Linking target lib/librte_ring.so.24.0 00:02:30.544 [599/707] Linking target lib/librte_dmadev.so.24.0 00:02:30.544 [600/707] Linking target lib/librte_pci.so.24.0 00:02:30.544 [601/707] Linking target lib/librte_cfgfile.so.24.0 00:02:30.544 [602/707] Linking target lib/librte_jobstats.so.24.0 00:02:30.544 [603/707] Linking target lib/librte_rawdev.so.24.0 00:02:30.544 [604/707] Linking target lib/librte_stack.so.24.0 00:02:30.544 [605/707] Linking target drivers/librte_bus_vdev.so.24.0 00:02:30.544 [606/707] Linking target lib/librte_acl.so.24.0 00:02:30.804 [607/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:30.804 [608/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:30.804 [609/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:30.804 [610/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:30.804 [611/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:30.804 [612/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:30.804 [613/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:30.804 [614/707] Linking target drivers/librte_bus_pci.so.24.0 00:02:30.804 [615/707] Linking target lib/librte_rcu.so.24.0 00:02:30.804 [616/707] Linking target lib/librte_mempool.so.24.0 00:02:30.804 [617/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:30.804 [618/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:30.804 [619/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:30.804 [620/707] Linking target drivers/librte_mempool_ring.so.24.0 00:02:30.804 [621/707] Linking target lib/librte_rib.so.24.0 00:02:30.804 [622/707] Linking target lib/librte_mbuf.so.24.0 00:02:31.064 [623/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:31.064 [624/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:31.064 [625/707] Linking target lib/librte_fib.so.24.0 00:02:31.064 [626/707] Linking target lib/librte_mldev.so.24.0 00:02:31.064 [627/707] Linking target lib/librte_sched.so.24.0 00:02:31.064 [628/707] Linking target lib/librte_net.so.24.0 00:02:31.064 [629/707] Linking target lib/librte_gpudev.so.24.0 00:02:31.064 [630/707] Linking target lib/librte_distributor.so.24.0 00:02:31.064 [631/707] Linking target 
lib/librte_reorder.so.24.0 00:02:31.064 [632/707] Linking target lib/librte_compressdev.so.24.0 00:02:31.064 [633/707] Linking target lib/librte_regexdev.so.24.0 00:02:31.064 [634/707] Linking target lib/librte_bbdev.so.24.0 00:02:31.064 [635/707] Linking target lib/librte_cryptodev.so.24.0 00:02:31.323 [636/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:31.323 [637/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:31.323 [638/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:31.323 [639/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:31.323 [640/707] Linking target lib/librte_security.so.24.0 00:02:31.323 [641/707] Linking target lib/librte_hash.so.24.0 00:02:31.323 [642/707] Linking target lib/librte_cmdline.so.24.0 00:02:31.323 [643/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:31.323 [644/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:31.582 [645/707] Linking target lib/librte_pdcp.so.24.0 00:02:31.582 [646/707] Linking target lib/librte_lpm.so.24.0 00:02:31.582 [647/707] Linking target lib/librte_efd.so.24.0 00:02:31.582 [648/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.582 [649/707] Linking target lib/librte_member.so.24.0 00:02:31.582 [650/707] Linking target lib/librte_ipsec.so.24.0 00:02:31.582 [651/707] Linking target lib/librte_ethdev.so.24.0 00:02:31.582 [652/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:31.582 [653/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:31.582 [654/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:31.839 [655/707] Linking target lib/librte_metrics.so.24.0 00:02:31.839 [656/707] Linking target lib/librte_pcapng.so.24.0 00:02:31.839 [657/707] Linking target lib/librte_power.so.24.0 00:02:31.839 [658/707] Linking target lib/librte_gso.so.24.0 00:02:31.839 [659/707] Linking target lib/librte_gro.so.24.0 00:02:31.839 [660/707] Linking target lib/librte_ip_frag.so.24.0 00:02:31.839 [661/707] Linking target lib/librte_bpf.so.24.0 00:02:31.839 [662/707] Linking target lib/librte_eventdev.so.24.0 00:02:31.839 [663/707] Linking target drivers/librte_net_i40e.so.24.0 00:02:31.839 [664/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:31.839 [665/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:31.839 [666/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:31.839 [667/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:31.839 [668/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:31.839 [669/707] Linking target lib/librte_bitratestats.so.24.0 00:02:31.840 [670/707] Linking target lib/librte_latencystats.so.24.0 00:02:31.840 [671/707] Linking target lib/librte_graph.so.24.0 00:02:31.840 [672/707] Linking target lib/librte_dispatcher.so.24.0 00:02:31.840 [673/707] Linking target lib/librte_pdump.so.24.0 00:02:31.840 [674/707] Linking target lib/librte_port.so.24.0 00:02:32.098 [675/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:32.098 [676/707] Generating symbol file 
lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:32.098 [677/707] Linking target lib/librte_node.so.24.0 00:02:32.098 [678/707] Linking target lib/librte_table.so.24.0 00:02:32.356 [679/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:35.646 [680/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:35.646 [681/707] Linking static target lib/librte_pipeline.a 00:02:35.646 [682/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:35.646 [683/707] Linking static target lib/librte_vhost.a 00:02:36.212 [684/707] Linking target app/dpdk-test-acl 00:02:36.212 [685/707] Linking target app/dpdk-dumpcap 00:02:36.212 [686/707] Linking target app/dpdk-pdump 00:02:36.212 [687/707] Linking target app/dpdk-test-compress-perf 00:02:36.212 [688/707] Linking target app/dpdk-test-dma-perf 00:02:36.212 [689/707] Linking target app/dpdk-test-regex 00:02:36.212 [690/707] Linking target app/dpdk-test-pipeline 00:02:36.212 [691/707] Linking target app/dpdk-test-security-perf 00:02:36.212 [692/707] Linking target app/dpdk-test-sad 00:02:36.212 [693/707] Linking target app/dpdk-test-mldev 00:02:36.212 [694/707] Linking target app/dpdk-graph 00:02:36.212 [695/707] Linking target app/dpdk-proc-info 00:02:36.212 [696/707] Linking target app/dpdk-test-gpudev 00:02:36.212 [697/707] Linking target app/dpdk-test-fib 00:02:36.212 [698/707] Linking target app/dpdk-test-cmdline 00:02:36.212 [699/707] Linking target app/dpdk-test-bbdev 00:02:36.212 [700/707] Linking target app/dpdk-test-flow-perf 00:02:36.212 [701/707] Linking target app/dpdk-test-crypto-perf 00:02:36.212 [702/707] Linking target app/dpdk-test-eventdev 00:02:36.212 [703/707] Linking target app/dpdk-testpmd 00:02:37.588 [704/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.588 [705/707] Linking target lib/librte_vhost.so.24.0 00:02:40.126 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.126 [707/707] Linking target lib/librte_pipeline.so.24.0 00:02:40.126 05:31:13 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:02:40.126 05:31:13 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:40.126 05:31:13 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install 00:02:40.126 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:40.126 [0/1] Installing files. 
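The two "$" lines above show the autotest build script probing the host OS and then driving the DPDK install. A minimal sketch of that step, assuming the workspace layout of this run (the guard shown here is a hypothetical reading of the check logged at common/autobuild_common.sh@194; the real script may structure it differently):

    # hypothetical reconstruction: run the ninja install only on non-FreeBSD runners
    if [[ "$(uname -s)" != "FreeBSD" ]]; then
        ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install
    fi

The "Installing ..." lines that follow are that ninja install copying the built DPDK artifacts and bundled example sources into dpdk/build/share/dpdk/examples.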
00:02:40.389 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:40.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:40.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:40.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:40.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:40.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:40.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:40.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.390 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.390 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:40.390 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.391 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:40.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.392 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:40.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
[00:02:40.392 to 00:02:40.661 — DPDK install output condensed; the per-file "Installing <source> to <destination>" entries in this span cover: example sources (server_node_efd, ip_reassembly, l2fwd-macsec, cmdline, packet_ordering, l2fwd-jobstats, vdpa, multi_process with hotplug_mp/simple_mp/symmetric_mp/client_server_mp, vhost_blk, ip_fragmentation, flow_filtering, vhost, helloworld, qos_sched, ip_pipeline with its .cli examples, distributor, vm_power_manager with guest_cli, ntb, bpf, l2fwd-event, vmdq_dcb) installed to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples; lib/librte_*.a and lib/librte_*.so.24.0 (log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, mldev, rib, reorder, sched, security, stack, vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, graph, node) installed to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib; driver libraries librte_bus_pci, librte_bus_vdev, librte_mempool_ring and librte_net_i40e (.a to build/lib, .so.24.0 to build/lib/dpdk/pmds-24.0); apps dpdk-dumpcap, dpdk-graph, dpdk-pdump, dpdk-proc-info, dpdk-testpmd and the dpdk-test-* tools (acl, bbdev, cmdline, compress-perf, crypto-perf, dma-perf, eventdev, fib, flow-perf, gpudev, mldev, pipeline, regex, sad, security-perf) installed to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin; and public headers (config/rte_config.h plus the lib/{log,kvargs,telemetry,eal,ring,rcu,mempool,mbuf,net,meter,ethdev} rte_*.h headers, ending with lib/ethdev/rte_flow.h) installed to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include, with the generic EAL headers going to build/include/generic]
00:02:40.661
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.661 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.662 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:40.663 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:40.663 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:40.663 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:40.663 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:40.663 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:40.663 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:40.663 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:40.663 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:40.663 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:40.663 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:40.663 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:40.663 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:40.663 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:40.663 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:40.663 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:40.663 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:40.663 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:40.663 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:40.663 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:40.663 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:40.663 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:40.663 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:40.663 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:40.663 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:40.663 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:40.663 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:40.663 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:40.663 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:40.663 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:40.663 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:40.663 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:40.664 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:40.664 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:40.664 
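A few entries earlier in this install pass, DPDK's Python usertools (dpdk-devbind.py, dpdk-pmdinfo.py, dpdk-telemetry.py, dpdk-hugepages.py, dpdk-rss-flows.py, plus dpdk-cmdline-gen.py from buildtools) were copied into dpdk/build/bin. A rough sketch of how these stock DPDK utilities are typically invoked on a test node; the hugepage size and PCI address below are illustrative placeholders, not values taken from this run:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin

# Reserve hugepages and show the result (size is an example value)
sudo ./dpdk-hugepages.py --setup 2G
./dpdk-hugepages.py --show

# List NIC-to-driver bindings, then bind one port to vfio-pci (address is a placeholder)
./dpdk-devbind.py --status
sudo ./dpdk-devbind.py --bind=vfio-pci 0000:3b:00.0

# Dump the PMD metadata embedded in the freshly built i40e driver
./dpdk-pmdinfo.py ../lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0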
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:40.664 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:40.664 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:40.664 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:40.664 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:40.664 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:40.664 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:40.664 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:40.664 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:40.664 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:40.664 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:40.664 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:40.664 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:40.664 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:40.664 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:40.664 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:40.664 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:40.664 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:40.664 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:40.664 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:40.664 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:40.664 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:40.664 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:40.664 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:40.664 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:40.664 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:40.664 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:40.664 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:40.664 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:40.664 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:40.664 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:40.664 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:40.664 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:40.664 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:40.664 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:40.664 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:40.664 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:40.664 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:40.664 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:40.664 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:40.664 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:40.664 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:40.664 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:40.664 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:40.664 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:40.664 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:40.664 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:40.664 Installing symlink pointing to librte_regexdev.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:40.664 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:40.664 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:40.664 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:40.664 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:40.664 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:40.664 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:40.664 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:40.664 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:40.664 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:40.664 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:40.664 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:40.664 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:40.664 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:40.664 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:40.664 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:40.664 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:40.664 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:40.664 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:40.664 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:40.664 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:40.664 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:40.664 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:40.664 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:40.664 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:40.664 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:40.664 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:40.664 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:40.664 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:40.664 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:40.664 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:40.664 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:40.664 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:40.664 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:40.664 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:40.664 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:40.664 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:40.664 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:40.664 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:40.665 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:40.665 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:40.665 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:40.665 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:40.665 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:40.665 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:40.665 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:40.665 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:40.665 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:40.665 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:40.665 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:40.665 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:40.665 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:40.665 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:40.665 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:40.665 05:31:14 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:40.665 05:31:14 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:40.665 00:02:40.665 real 0m29.055s 00:02:40.665 user 9m33.830s 00:02:40.665 sys 2m9.365s 00:02:40.665 05:31:14 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:40.665 05:31:14 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:40.665 ************************************ 00:02:40.665 END TEST build_native_dpdk 00:02:40.665 ************************************ 00:02:40.665 05:31:14 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:40.665 05:31:14 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:40.665 05:31:14 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:40.665 05:31:14 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:40.665 05:31:14 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:40.665 05:31:14 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:40.665 05:31:14 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:40.665 05:31:14 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:40.924 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:40.924 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:40.924 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.924 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:41.489 Using 'verbs' RDMA provider 00:02:54.265 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:06.482 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:06.482 Creating mk/config.mk...done. 00:03:06.482 Creating mk/cc.flags.mk...done. 00:03:06.482 Type 'make' to build. 00:03:06.482 05:31:39 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:03:06.482 05:31:39 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:06.482 05:31:39 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:06.482 05:31:39 -- common/autotest_common.sh@10 -- $ set +x 00:03:06.482 ************************************ 00:03:06.482 START TEST make 00:03:06.482 ************************************ 00:03:06.482 05:31:39 make -- common/autotest_common.sh@1125 -- $ make -j96 00:03:06.482 make[1]: Nothing to be done for 'all'. 
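The configure step above resolves DPDK through the pkg-config files that were just installed into dpdk/build/lib/pkgconfig (libdpdk.pc and libdpdk-libs.pc), which is where the "Using ... pkgconfig for additional libs" message comes from. Any other consumer of this DPDK build tree can obtain the same include and link flags the same way. A minimal sketch assuming the workspace layout shown in this log; the test program, its file name, and the rte_version() call are illustrative and not part of this run:

DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
export PKG_CONFIG_PATH=$DPDK_BUILD/lib/pkgconfig

# Report the DPDK version described by the installed .pc file
pkg-config --modversion libdpdk

# Compile a trivial program against the headers installed into $DPDK_BUILD/include
cat > /tmp/dpdk_version_check.c <<'EOF'
#include <stdio.h>
#include <rte_version.h>   /* one of the headers installed above */

int main(void)
{
    printf("built against %s\n", rte_version());
    return 0;
}
EOF
gcc /tmp/dpdk_version_check.c -o /tmp/dpdk_version_check \
    $(pkg-config --cflags --libs libdpdk)

# The shared libraries live in $DPDK_BUILD/lib, so point the loader at them to run
LD_LIBRARY_PATH=$DPDK_BUILD/lib /tmp/dpdk_version_check

# The PMDs relocated into lib/dpdk/pmds-24.0 above can also be loaded explicitly
# at run time with EAL's -d option, e.g. -d $DPDK_BUILD/lib/dpdk/pmds-24.0

SPDK's configure performs essentially the same resolution when given --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build, which is why the shared DPDK libraries and the pmds-24.0 plugin directory are installed before the SPDK build starts.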
00:03:07.873 The Meson build system
00:03:07.873 Version: 1.5.0
00:03:07.873 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:07.873 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:07.873 Build type: native build
00:03:07.873 Project name: libvfio-user
00:03:07.873 Project version: 0.0.1
00:03:07.873 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:07.873 C linker for the host machine: gcc ld.bfd 2.40-14
00:03:07.873 Host machine cpu family: x86_64
00:03:07.873 Host machine cpu: x86_64
00:03:07.873 Run-time dependency threads found: YES
00:03:07.873 Library dl found: YES
00:03:07.873 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:07.873 Run-time dependency json-c found: YES 0.17
00:03:07.873 Run-time dependency cmocka found: YES 1.1.7
00:03:07.873 Program pytest-3 found: NO
00:03:07.873 Program flake8 found: NO
00:03:07.873 Program misspell-fixer found: NO
00:03:07.873 Program restructuredtext-lint found: NO
00:03:07.873 Program valgrind found: YES (/usr/bin/valgrind)
00:03:07.873 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:07.873 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:07.873 Compiler for C supports arguments -Wwrite-strings: YES
00:03:07.873 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:07.873 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:07.873 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:07.873 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:07.873 Build targets in project: 8
00:03:07.873 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:07.873 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:07.873
00:03:07.873 libvfio-user 0.0.1
00:03:07.873
00:03:07.873 User defined options
00:03:07.873 buildtype : debug
00:03:07.873 default_library: shared
00:03:07.873 libdir : /usr/local/lib
00:03:07.873
00:03:07.873 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:08.807 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:08.807 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:08.807 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:08.807 [3/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:08.807 [4/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:08.807 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:08.807 [6/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:08.807 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:08.807 [8/37] Compiling C object samples/null.p/null.c.o
00:03:08.807 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:08.807 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:08.807 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:08.807 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:08.807 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:08.807 [14/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:08.807 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:08.807 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:08.807 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:08.807 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:08.807 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:08.807 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:08.807 [21/37] Compiling C object samples/server.p/server.c.o
00:03:08.807 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:08.807 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:08.807 [24/37] Compiling C object samples/client.p/client.c.o
00:03:08.807 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:08.807 [26/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:08.807 [27/37] Linking target samples/client
00:03:08.807 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:08.807 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:08.807 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:03:08.807 [31/37] Linking target test/unit_tests
00:03:09.066 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:09.066 [33/37] Linking target samples/lspci
00:03:09.066 [34/37] Linking target samples/server
00:03:09.066 [35/37] Linking target samples/null
00:03:09.066 [36/37] Linking target samples/gpio-pci-idio-16
00:03:09.066 [37/37] Linking target samples/shadow_ioeventfd_server
00:03:09.066 INFO: autodetecting backend as ninja
00:03:09.066 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:09.066 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:09.633 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:09.633 ninja: no work to do. 00:03:36.181 CC lib/ut/ut.o 00:03:36.181 CC lib/ut_mock/mock.o 00:03:36.181 CC lib/log/log.o 00:03:36.181 CC lib/log/log_flags.o 00:03:36.181 CC lib/log/log_deprecated.o 00:03:36.181 LIB libspdk_ut.a 00:03:36.181 LIB libspdk_ut_mock.a 00:03:36.181 SO libspdk_ut.so.2.0 00:03:36.181 LIB libspdk_log.a 00:03:36.181 SO libspdk_ut_mock.so.6.0 00:03:36.181 SO libspdk_log.so.7.0 00:03:36.181 SYMLINK libspdk_ut.so 00:03:36.181 SYMLINK libspdk_ut_mock.so 00:03:36.181 SYMLINK libspdk_log.so 00:03:36.440 CC lib/util/bit_array.o 00:03:36.440 CC lib/util/base64.o 00:03:36.440 CC lib/util/cpuset.o 00:03:36.440 CC lib/util/crc16.o 00:03:36.440 CC lib/util/crc32c.o 00:03:36.440 CC lib/util/crc32.o 00:03:36.440 CC lib/util/crc64.o 00:03:36.440 CC lib/util/crc32_ieee.o 00:03:36.440 CC lib/util/dif.o 00:03:36.440 CC lib/util/fd.o 00:03:36.440 CC lib/util/fd_group.o 00:03:36.440 CC lib/util/file.o 00:03:36.440 CXX lib/trace_parser/trace.o 00:03:36.440 CC lib/util/hexlify.o 00:03:36.440 CC lib/util/iov.o 00:03:36.440 CC lib/util/math.o 00:03:36.440 CC lib/util/net.o 00:03:36.440 CC lib/util/strerror_tls.o 00:03:36.440 CC lib/util/pipe.o 00:03:36.440 CC lib/util/string.o 00:03:36.440 CC lib/util/uuid.o 00:03:36.440 CC lib/util/xor.o 00:03:36.440 CC lib/util/zipf.o 00:03:36.440 CC lib/util/md5.o 00:03:36.440 CC lib/dma/dma.o 00:03:36.440 CC lib/ioat/ioat.o 00:03:36.698 CC lib/vfio_user/host/vfio_user_pci.o 00:03:36.698 CC lib/vfio_user/host/vfio_user.o 00:03:36.698 LIB libspdk_dma.a 00:03:36.698 SO libspdk_dma.so.5.0 00:03:36.698 LIB libspdk_ioat.a 00:03:36.698 SYMLINK libspdk_dma.so 00:03:36.698 SO libspdk_ioat.so.7.0 00:03:36.957 SYMLINK libspdk_ioat.so 00:03:36.957 LIB libspdk_vfio_user.a 00:03:36.957 SO libspdk_vfio_user.so.5.0 00:03:36.957 LIB libspdk_util.a 00:03:36.957 SYMLINK libspdk_vfio_user.so 00:03:36.957 SO libspdk_util.so.10.0 00:03:37.215 SYMLINK libspdk_util.so 00:03:37.215 LIB libspdk_trace_parser.a 00:03:37.215 SO libspdk_trace_parser.so.6.0 00:03:37.215 SYMLINK libspdk_trace_parser.so 00:03:37.474 CC lib/rdma_utils/rdma_utils.o 00:03:37.474 CC lib/json/json_parse.o 00:03:37.474 CC lib/json/json_util.o 00:03:37.474 CC lib/json/json_write.o 00:03:37.474 CC lib/conf/conf.o 00:03:37.474 CC lib/rdma_provider/common.o 00:03:37.474 CC lib/idxd/idxd.o 00:03:37.474 CC lib/idxd/idxd_user.o 00:03:37.474 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:37.474 CC lib/idxd/idxd_kernel.o 00:03:37.474 CC lib/vmd/vmd.o 00:03:37.474 CC lib/env_dpdk/env.o 00:03:37.474 CC lib/vmd/led.o 00:03:37.474 CC lib/env_dpdk/memory.o 00:03:37.474 CC lib/env_dpdk/pci.o 00:03:37.474 CC lib/env_dpdk/init.o 00:03:37.474 CC lib/env_dpdk/threads.o 00:03:37.474 CC lib/env_dpdk/pci_ioat.o 00:03:37.474 CC lib/env_dpdk/pci_virtio.o 00:03:37.474 CC lib/env_dpdk/pci_vmd.o 00:03:37.474 CC lib/env_dpdk/pci_idxd.o 00:03:37.474 CC lib/env_dpdk/pci_event.o 00:03:37.474 CC lib/env_dpdk/sigbus_handler.o 00:03:37.474 CC lib/env_dpdk/pci_dpdk.o 00:03:37.474 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:37.474 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:37.733 LIB libspdk_rdma_provider.a 00:03:37.733 SO libspdk_rdma_provider.so.6.0 00:03:37.733 LIB libspdk_conf.a 00:03:37.733 LIB libspdk_rdma_utils.a 00:03:37.733 
SO libspdk_conf.so.6.0 00:03:37.733 LIB libspdk_json.a 00:03:37.733 SO libspdk_rdma_utils.so.1.0 00:03:37.733 SYMLINK libspdk_rdma_provider.so 00:03:37.733 SO libspdk_json.so.6.0 00:03:37.733 SYMLINK libspdk_conf.so 00:03:37.733 SYMLINK libspdk_rdma_utils.so 00:03:37.733 SYMLINK libspdk_json.so 00:03:37.990 LIB libspdk_idxd.a 00:03:37.990 SO libspdk_idxd.so.12.1 00:03:37.990 LIB libspdk_vmd.a 00:03:37.990 SO libspdk_vmd.so.6.0 00:03:37.990 SYMLINK libspdk_idxd.so 00:03:37.990 SYMLINK libspdk_vmd.so 00:03:37.990 CC lib/jsonrpc/jsonrpc_server.o 00:03:37.990 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:37.990 CC lib/jsonrpc/jsonrpc_client.o 00:03:37.990 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:38.247 LIB libspdk_jsonrpc.a 00:03:38.247 SO libspdk_jsonrpc.so.6.0 00:03:38.505 SYMLINK libspdk_jsonrpc.so 00:03:38.505 LIB libspdk_env_dpdk.a 00:03:38.505 SO libspdk_env_dpdk.so.15.0 00:03:38.505 SYMLINK libspdk_env_dpdk.so 00:03:38.762 CC lib/rpc/rpc.o 00:03:38.762 LIB libspdk_rpc.a 00:03:38.762 SO libspdk_rpc.so.6.0 00:03:39.019 SYMLINK libspdk_rpc.so 00:03:39.277 CC lib/keyring/keyring.o 00:03:39.277 CC lib/keyring/keyring_rpc.o 00:03:39.277 CC lib/trace/trace.o 00:03:39.277 CC lib/trace/trace_flags.o 00:03:39.277 CC lib/trace/trace_rpc.o 00:03:39.277 CC lib/notify/notify.o 00:03:39.277 CC lib/notify/notify_rpc.o 00:03:39.277 LIB libspdk_notify.a 00:03:39.534 LIB libspdk_keyring.a 00:03:39.534 LIB libspdk_trace.a 00:03:39.534 SO libspdk_notify.so.6.0 00:03:39.534 SO libspdk_keyring.so.2.0 00:03:39.534 SO libspdk_trace.so.11.0 00:03:39.534 SYMLINK libspdk_notify.so 00:03:39.534 SYMLINK libspdk_keyring.so 00:03:39.534 SYMLINK libspdk_trace.so 00:03:39.791 CC lib/thread/thread.o 00:03:39.791 CC lib/thread/iobuf.o 00:03:39.791 CC lib/sock/sock.o 00:03:39.791 CC lib/sock/sock_rpc.o 00:03:40.048 LIB libspdk_sock.a 00:03:40.306 SO libspdk_sock.so.10.0 00:03:40.306 SYMLINK libspdk_sock.so 00:03:40.563 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:40.563 CC lib/nvme/nvme_fabric.o 00:03:40.563 CC lib/nvme/nvme_ctrlr.o 00:03:40.563 CC lib/nvme/nvme_ns.o 00:03:40.563 CC lib/nvme/nvme_ns_cmd.o 00:03:40.563 CC lib/nvme/nvme_pcie.o 00:03:40.563 CC lib/nvme/nvme_qpair.o 00:03:40.563 CC lib/nvme/nvme_pcie_common.o 00:03:40.563 CC lib/nvme/nvme.o 00:03:40.563 CC lib/nvme/nvme_transport.o 00:03:40.563 CC lib/nvme/nvme_quirks.o 00:03:40.563 CC lib/nvme/nvme_discovery.o 00:03:40.563 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:40.563 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:40.563 CC lib/nvme/nvme_tcp.o 00:03:40.563 CC lib/nvme/nvme_opal.o 00:03:40.563 CC lib/nvme/nvme_io_msg.o 00:03:40.563 CC lib/nvme/nvme_zns.o 00:03:40.563 CC lib/nvme/nvme_poll_group.o 00:03:40.563 CC lib/nvme/nvme_stubs.o 00:03:40.563 CC lib/nvme/nvme_auth.o 00:03:40.563 CC lib/nvme/nvme_vfio_user.o 00:03:40.563 CC lib/nvme/nvme_cuse.o 00:03:40.563 CC lib/nvme/nvme_rdma.o 00:03:40.821 LIB libspdk_thread.a 00:03:41.078 SO libspdk_thread.so.10.1 00:03:41.078 SYMLINK libspdk_thread.so 00:03:41.336 CC lib/blob/blobstore.o 00:03:41.336 CC lib/blob/request.o 00:03:41.336 CC lib/blob/zeroes.o 00:03:41.336 CC lib/blob/blob_bs_dev.o 00:03:41.336 CC lib/fsdev/fsdev_io.o 00:03:41.336 CC lib/fsdev/fsdev.o 00:03:41.336 CC lib/fsdev/fsdev_rpc.o 00:03:41.336 CC lib/accel/accel.o 00:03:41.336 CC lib/accel/accel_rpc.o 00:03:41.336 CC lib/accel/accel_sw.o 00:03:41.336 CC lib/init/json_config.o 00:03:41.336 CC lib/init/subsystem.o 00:03:41.336 CC lib/init/rpc.o 00:03:41.336 CC lib/init/subsystem_rpc.o 00:03:41.336 CC lib/vfu_tgt/tgt_endpoint.o 00:03:41.336 CC lib/vfu_tgt/tgt_rpc.o 
00:03:41.336 CC lib/virtio/virtio.o 00:03:41.336 CC lib/virtio/virtio_vfio_user.o 00:03:41.336 CC lib/virtio/virtio_vhost_user.o 00:03:41.336 CC lib/virtio/virtio_pci.o 00:03:41.592 LIB libspdk_init.a 00:03:41.592 SO libspdk_init.so.6.0 00:03:41.592 LIB libspdk_vfu_tgt.a 00:03:41.592 LIB libspdk_virtio.a 00:03:41.592 SYMLINK libspdk_init.so 00:03:41.592 SO libspdk_vfu_tgt.so.3.0 00:03:41.592 SO libspdk_virtio.so.7.0 00:03:41.592 SYMLINK libspdk_virtio.so 00:03:41.592 SYMLINK libspdk_vfu_tgt.so 00:03:41.849 LIB libspdk_fsdev.a 00:03:41.849 SO libspdk_fsdev.so.1.0 00:03:41.849 CC lib/event/app.o 00:03:41.849 CC lib/event/reactor.o 00:03:41.849 CC lib/event/log_rpc.o 00:03:41.849 CC lib/event/app_rpc.o 00:03:41.849 CC lib/event/scheduler_static.o 00:03:41.849 SYMLINK libspdk_fsdev.so 00:03:42.107 LIB libspdk_accel.a 00:03:42.107 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:42.107 LIB libspdk_nvme.a 00:03:42.107 SO libspdk_accel.so.16.0 00:03:42.107 LIB libspdk_event.a 00:03:42.363 SYMLINK libspdk_accel.so 00:03:42.363 SO libspdk_event.so.14.0 00:03:42.363 SO libspdk_nvme.so.14.0 00:03:42.363 SYMLINK libspdk_event.so 00:03:42.363 SYMLINK libspdk_nvme.so 00:03:42.621 CC lib/bdev/bdev.o 00:03:42.621 CC lib/bdev/bdev_rpc.o 00:03:42.621 CC lib/bdev/bdev_zone.o 00:03:42.621 CC lib/bdev/part.o 00:03:42.621 CC lib/bdev/scsi_nvme.o 00:03:42.621 LIB libspdk_fuse_dispatcher.a 00:03:42.621 SO libspdk_fuse_dispatcher.so.1.0 00:03:42.878 SYMLINK libspdk_fuse_dispatcher.so 00:03:43.443 LIB libspdk_blob.a 00:03:43.443 SO libspdk_blob.so.11.0 00:03:43.443 SYMLINK libspdk_blob.so 00:03:43.700 CC lib/blobfs/blobfs.o 00:03:43.700 CC lib/blobfs/tree.o 00:03:43.700 CC lib/lvol/lvol.o 00:03:44.302 LIB libspdk_bdev.a 00:03:44.302 SO libspdk_bdev.so.16.0 00:03:44.302 LIB libspdk_blobfs.a 00:03:44.583 SO libspdk_blobfs.so.10.0 00:03:44.583 SYMLINK libspdk_bdev.so 00:03:44.583 LIB libspdk_lvol.a 00:03:44.583 SO libspdk_lvol.so.10.0 00:03:44.583 SYMLINK libspdk_blobfs.so 00:03:44.583 SYMLINK libspdk_lvol.so 00:03:44.875 CC lib/ftl/ftl_core.o 00:03:44.875 CC lib/ftl/ftl_init.o 00:03:44.875 CC lib/ftl/ftl_layout.o 00:03:44.875 CC lib/ftl/ftl_debug.o 00:03:44.875 CC lib/ftl/ftl_io.o 00:03:44.875 CC lib/ftl/ftl_l2p_flat.o 00:03:44.875 CC lib/ftl/ftl_sb.o 00:03:44.875 CC lib/ftl/ftl_l2p.o 00:03:44.875 CC lib/ftl/ftl_band.o 00:03:44.875 CC lib/ftl/ftl_nv_cache.o 00:03:44.875 CC lib/scsi/dev.o 00:03:44.875 CC lib/ftl/ftl_band_ops.o 00:03:44.875 CC lib/ftl/ftl_writer.o 00:03:44.875 CC lib/scsi/lun.o 00:03:44.875 CC lib/ftl/ftl_rq.o 00:03:44.875 CC lib/scsi/port.o 00:03:44.875 CC lib/ftl/ftl_reloc.o 00:03:44.875 CC lib/scsi/scsi.o 00:03:44.875 CC lib/scsi/scsi_pr.o 00:03:44.875 CC lib/ftl/ftl_l2p_cache.o 00:03:44.875 CC lib/scsi/scsi_bdev.o 00:03:44.875 CC lib/ftl/mngt/ftl_mngt.o 00:03:44.875 CC lib/ftl/ftl_p2l.o 00:03:44.875 CC lib/scsi/scsi_rpc.o 00:03:44.875 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:44.875 CC lib/ftl/ftl_p2l_log.o 00:03:44.875 CC lib/scsi/task.o 00:03:44.875 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:44.875 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:44.875 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:44.875 CC lib/nvmf/ctrlr.o 00:03:44.875 CC lib/nvmf/ctrlr_discovery.o 00:03:44.875 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:44.875 CC lib/nbd/nbd.o 00:03:44.875 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:44.875 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:44.875 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:44.875 CC lib/nbd/nbd_rpc.o 00:03:44.875 CC lib/nvmf/ctrlr_bdev.o 00:03:44.875 CC lib/nvmf/nvmf.o 00:03:44.875 CC lib/nvmf/subsystem.o 
00:03:44.875 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:44.875 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:44.875 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:44.875 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:44.875 CC lib/nvmf/nvmf_rpc.o 00:03:44.875 CC lib/ftl/utils/ftl_conf.o 00:03:44.875 CC lib/nvmf/transport.o 00:03:44.875 CC lib/nvmf/tcp.o 00:03:44.875 CC lib/ftl/utils/ftl_md.o 00:03:44.875 CC lib/nvmf/stubs.o 00:03:44.875 CC lib/ftl/utils/ftl_mempool.o 00:03:44.875 CC lib/ftl/utils/ftl_property.o 00:03:44.875 CC lib/nvmf/mdns_server.o 00:03:44.875 CC lib/ublk/ublk.o 00:03:44.875 CC lib/nvmf/vfio_user.o 00:03:44.875 CC lib/ublk/ublk_rpc.o 00:03:44.875 CC lib/ftl/utils/ftl_bitmap.o 00:03:44.875 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:44.875 CC lib/nvmf/rdma.o 00:03:44.875 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:44.875 CC lib/nvmf/auth.o 00:03:44.875 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:44.875 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:44.875 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:44.875 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:44.875 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:44.875 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:44.875 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:44.875 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:44.875 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:44.875 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:44.875 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:44.876 CC lib/ftl/base/ftl_base_dev.o 00:03:44.876 CC lib/ftl/base/ftl_base_bdev.o 00:03:44.876 CC lib/ftl/ftl_trace.o 00:03:45.486 LIB libspdk_nbd.a 00:03:45.486 SO libspdk_nbd.so.7.0 00:03:45.486 SYMLINK libspdk_nbd.so 00:03:45.486 LIB libspdk_scsi.a 00:03:45.486 SO libspdk_scsi.so.9.0 00:03:45.486 LIB libspdk_ublk.a 00:03:45.486 SO libspdk_ublk.so.3.0 00:03:45.486 SYMLINK libspdk_scsi.so 00:03:45.745 LIB libspdk_ftl.a 00:03:45.745 SYMLINK libspdk_ublk.so 00:03:45.745 SO libspdk_ftl.so.9.0 00:03:46.003 CC lib/iscsi/init_grp.o 00:03:46.003 CC lib/iscsi/conn.o 00:03:46.003 CC lib/iscsi/iscsi.o 00:03:46.003 CC lib/iscsi/param.o 00:03:46.003 CC lib/iscsi/portal_grp.o 00:03:46.003 CC lib/iscsi/tgt_node.o 00:03:46.003 CC lib/iscsi/iscsi_subsystem.o 00:03:46.003 CC lib/iscsi/iscsi_rpc.o 00:03:46.003 CC lib/iscsi/task.o 00:03:46.003 CC lib/vhost/vhost.o 00:03:46.003 CC lib/vhost/vhost_rpc.o 00:03:46.003 CC lib/vhost/vhost_scsi.o 00:03:46.003 CC lib/vhost/vhost_blk.o 00:03:46.003 CC lib/vhost/rte_vhost_user.o 00:03:46.003 SYMLINK libspdk_ftl.so 00:03:46.570 LIB libspdk_nvmf.a 00:03:46.570 SO libspdk_nvmf.so.19.0 00:03:46.570 LIB libspdk_vhost.a 00:03:46.829 SO libspdk_vhost.so.8.0 00:03:46.829 SYMLINK libspdk_nvmf.so 00:03:46.829 SYMLINK libspdk_vhost.so 00:03:46.829 LIB libspdk_iscsi.a 00:03:47.088 SO libspdk_iscsi.so.8.0 00:03:47.088 SYMLINK libspdk_iscsi.so 00:03:47.656 CC module/vfu_device/vfu_virtio.o 00:03:47.656 CC module/vfu_device/vfu_virtio_blk.o 00:03:47.656 CC module/vfu_device/vfu_virtio_scsi.o 00:03:47.656 CC module/vfu_device/vfu_virtio_fs.o 00:03:47.656 CC module/vfu_device/vfu_virtio_rpc.o 00:03:47.656 CC module/env_dpdk/env_dpdk_rpc.o 00:03:47.656 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:47.656 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:47.656 CC module/sock/posix/posix.o 00:03:47.656 CC module/accel/dsa/accel_dsa.o 00:03:47.656 CC module/accel/iaa/accel_iaa.o 00:03:47.656 CC module/accel/iaa/accel_iaa_rpc.o 00:03:47.656 CC module/accel/dsa/accel_dsa_rpc.o 00:03:47.656 CC module/accel/ioat/accel_ioat.o 00:03:47.656 CC module/accel/ioat/accel_ioat_rpc.o 00:03:47.656 CC 
module/keyring/linux/keyring.o 00:03:47.656 CC module/blob/bdev/blob_bdev.o 00:03:47.656 CC module/keyring/linux/keyring_rpc.o 00:03:47.656 LIB libspdk_env_dpdk_rpc.a 00:03:47.656 CC module/scheduler/gscheduler/gscheduler.o 00:03:47.656 CC module/accel/error/accel_error.o 00:03:47.656 CC module/accel/error/accel_error_rpc.o 00:03:47.656 CC module/keyring/file/keyring.o 00:03:47.656 CC module/keyring/file/keyring_rpc.o 00:03:47.656 CC module/fsdev/aio/fsdev_aio.o 00:03:47.656 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:47.656 CC module/fsdev/aio/linux_aio_mgr.o 00:03:47.656 SO libspdk_env_dpdk_rpc.so.6.0 00:03:47.656 SYMLINK libspdk_env_dpdk_rpc.so 00:03:47.914 LIB libspdk_scheduler_dpdk_governor.a 00:03:47.914 LIB libspdk_keyring_linux.a 00:03:47.914 LIB libspdk_scheduler_gscheduler.a 00:03:47.914 LIB libspdk_keyring_file.a 00:03:47.914 SO libspdk_keyring_linux.so.1.0 00:03:47.915 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:47.915 LIB libspdk_scheduler_dynamic.a 00:03:47.915 LIB libspdk_accel_ioat.a 00:03:47.915 SO libspdk_scheduler_gscheduler.so.4.0 00:03:47.915 SO libspdk_keyring_file.so.2.0 00:03:47.915 LIB libspdk_accel_error.a 00:03:47.915 SO libspdk_scheduler_dynamic.so.4.0 00:03:47.915 SO libspdk_accel_error.so.2.0 00:03:47.915 LIB libspdk_accel_iaa.a 00:03:47.915 SO libspdk_accel_ioat.so.6.0 00:03:47.915 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:47.915 SYMLINK libspdk_keyring_linux.so 00:03:47.915 SYMLINK libspdk_scheduler_gscheduler.so 00:03:47.915 LIB libspdk_accel_dsa.a 00:03:47.915 SO libspdk_accel_iaa.so.3.0 00:03:47.915 SYMLINK libspdk_keyring_file.so 00:03:47.915 SYMLINK libspdk_accel_ioat.so 00:03:47.915 SYMLINK libspdk_scheduler_dynamic.so 00:03:47.915 LIB libspdk_blob_bdev.a 00:03:47.915 SYMLINK libspdk_accel_error.so 00:03:47.915 SO libspdk_accel_dsa.so.5.0 00:03:47.915 SYMLINK libspdk_accel_iaa.so 00:03:47.915 SO libspdk_blob_bdev.so.11.0 00:03:48.173 SYMLINK libspdk_accel_dsa.so 00:03:48.173 SYMLINK libspdk_blob_bdev.so 00:03:48.173 LIB libspdk_vfu_device.a 00:03:48.173 SO libspdk_vfu_device.so.3.0 00:03:48.173 SYMLINK libspdk_vfu_device.so 00:03:48.173 LIB libspdk_fsdev_aio.a 00:03:48.173 SO libspdk_fsdev_aio.so.1.0 00:03:48.173 LIB libspdk_sock_posix.a 00:03:48.431 SO libspdk_sock_posix.so.6.0 00:03:48.431 SYMLINK libspdk_fsdev_aio.so 00:03:48.431 SYMLINK libspdk_sock_posix.so 00:03:48.431 CC module/bdev/lvol/vbdev_lvol.o 00:03:48.431 CC module/bdev/error/vbdev_error.o 00:03:48.431 CC module/bdev/error/vbdev_error_rpc.o 00:03:48.431 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:48.431 CC module/bdev/malloc/bdev_malloc.o 00:03:48.431 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:48.431 CC module/bdev/gpt/gpt.o 00:03:48.431 CC module/bdev/gpt/vbdev_gpt.o 00:03:48.431 CC module/bdev/split/vbdev_split.o 00:03:48.431 CC module/bdev/split/vbdev_split_rpc.o 00:03:48.431 CC module/bdev/null/bdev_null.o 00:03:48.431 CC module/bdev/null/bdev_null_rpc.o 00:03:48.431 CC module/bdev/nvme/bdev_nvme.o 00:03:48.431 CC module/bdev/aio/bdev_aio.o 00:03:48.431 CC module/bdev/nvme/nvme_rpc.o 00:03:48.431 CC module/bdev/aio/bdev_aio_rpc.o 00:03:48.431 CC module/bdev/ftl/bdev_ftl.o 00:03:48.431 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:48.431 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:48.431 CC module/bdev/raid/bdev_raid.o 00:03:48.431 CC module/bdev/iscsi/bdev_iscsi.o 00:03:48.431 CC module/bdev/raid/bdev_raid_rpc.o 00:03:48.431 CC module/bdev/nvme/bdev_mdns_client.o 00:03:48.431 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:48.431 CC module/bdev/passthru/vbdev_passthru.o 00:03:48.431 CC 
module/bdev/raid/bdev_raid_sb.o 00:03:48.431 CC module/bdev/raid/raid0.o 00:03:48.431 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:48.431 CC module/bdev/nvme/vbdev_opal.o 00:03:48.431 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:48.431 CC module/bdev/raid/raid1.o 00:03:48.431 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:48.432 CC module/bdev/raid/concat.o 00:03:48.432 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:48.432 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:48.432 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:48.432 CC module/blobfs/bdev/blobfs_bdev.o 00:03:48.432 CC module/bdev/delay/vbdev_delay.o 00:03:48.432 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:48.432 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:48.432 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:48.432 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:48.690 LIB libspdk_bdev_split.a 00:03:48.690 LIB libspdk_blobfs_bdev.a 00:03:48.949 LIB libspdk_bdev_gpt.a 00:03:48.949 SO libspdk_bdev_split.so.6.0 00:03:48.949 SO libspdk_blobfs_bdev.so.6.0 00:03:48.949 LIB libspdk_bdev_null.a 00:03:48.949 LIB libspdk_bdev_error.a 00:03:48.949 SO libspdk_bdev_gpt.so.6.0 00:03:48.949 SO libspdk_bdev_null.so.6.0 00:03:48.949 SO libspdk_bdev_error.so.6.0 00:03:48.949 LIB libspdk_bdev_ftl.a 00:03:48.949 SYMLINK libspdk_bdev_split.so 00:03:48.949 LIB libspdk_bdev_zone_block.a 00:03:48.949 LIB libspdk_bdev_passthru.a 00:03:48.949 SYMLINK libspdk_blobfs_bdev.so 00:03:48.949 LIB libspdk_bdev_iscsi.a 00:03:48.949 SYMLINK libspdk_bdev_gpt.so 00:03:48.949 SYMLINK libspdk_bdev_null.so 00:03:48.949 SO libspdk_bdev_ftl.so.6.0 00:03:48.949 LIB libspdk_bdev_aio.a 00:03:48.949 SO libspdk_bdev_zone_block.so.6.0 00:03:48.949 SO libspdk_bdev_iscsi.so.6.0 00:03:48.949 LIB libspdk_bdev_malloc.a 00:03:48.949 SO libspdk_bdev_passthru.so.6.0 00:03:48.949 SYMLINK libspdk_bdev_error.so 00:03:48.949 SO libspdk_bdev_aio.so.6.0 00:03:48.949 SO libspdk_bdev_malloc.so.6.0 00:03:48.949 LIB libspdk_bdev_delay.a 00:03:48.949 SYMLINK libspdk_bdev_ftl.so 00:03:48.949 SYMLINK libspdk_bdev_zone_block.so 00:03:48.949 SYMLINK libspdk_bdev_iscsi.so 00:03:48.949 SYMLINK libspdk_bdev_passthru.so 00:03:48.949 LIB libspdk_bdev_lvol.a 00:03:48.949 SO libspdk_bdev_delay.so.6.0 00:03:48.949 SYMLINK libspdk_bdev_aio.so 00:03:48.949 SYMLINK libspdk_bdev_malloc.so 00:03:48.949 SO libspdk_bdev_lvol.so.6.0 00:03:48.949 SYMLINK libspdk_bdev_delay.so 00:03:48.949 LIB libspdk_bdev_virtio.a 00:03:49.208 SO libspdk_bdev_virtio.so.6.0 00:03:49.208 SYMLINK libspdk_bdev_lvol.so 00:03:49.208 SYMLINK libspdk_bdev_virtio.so 00:03:49.466 LIB libspdk_bdev_raid.a 00:03:49.466 SO libspdk_bdev_raid.so.6.0 00:03:49.466 SYMLINK libspdk_bdev_raid.so 00:03:50.402 LIB libspdk_bdev_nvme.a 00:03:50.402 SO libspdk_bdev_nvme.so.7.0 00:03:50.402 SYMLINK libspdk_bdev_nvme.so 00:03:50.969 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:50.969 CC module/event/subsystems/fsdev/fsdev.o 00:03:50.969 CC module/event/subsystems/keyring/keyring.o 00:03:50.969 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:50.969 CC module/event/subsystems/vmd/vmd.o 00:03:50.969 CC module/event/subsystems/iobuf/iobuf.o 00:03:50.969 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:50.969 CC module/event/subsystems/scheduler/scheduler.o 00:03:50.969 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:50.969 CC module/event/subsystems/sock/sock.o 00:03:50.969 LIB libspdk_event_fsdev.a 00:03:50.969 LIB libspdk_event_vfu_tgt.a 00:03:50.969 LIB libspdk_event_keyring.a 00:03:50.969 LIB libspdk_event_vhost_blk.a 
00:03:50.969 SO libspdk_event_fsdev.so.1.0 00:03:50.969 LIB libspdk_event_sock.a 00:03:51.229 LIB libspdk_event_scheduler.a 00:03:51.229 LIB libspdk_event_vmd.a 00:03:51.229 SO libspdk_event_keyring.so.1.0 00:03:51.229 SO libspdk_event_vfu_tgt.so.3.0 00:03:51.229 LIB libspdk_event_iobuf.a 00:03:51.229 SO libspdk_event_vhost_blk.so.3.0 00:03:51.229 SO libspdk_event_scheduler.so.4.0 00:03:51.229 SO libspdk_event_sock.so.5.0 00:03:51.229 SO libspdk_event_vmd.so.6.0 00:03:51.229 SO libspdk_event_iobuf.so.3.0 00:03:51.229 SYMLINK libspdk_event_fsdev.so 00:03:51.229 SYMLINK libspdk_event_vfu_tgt.so 00:03:51.229 SYMLINK libspdk_event_keyring.so 00:03:51.229 SYMLINK libspdk_event_vhost_blk.so 00:03:51.229 SYMLINK libspdk_event_scheduler.so 00:03:51.229 SYMLINK libspdk_event_sock.so 00:03:51.229 SYMLINK libspdk_event_vmd.so 00:03:51.229 SYMLINK libspdk_event_iobuf.so 00:03:51.486 CC module/event/subsystems/accel/accel.o 00:03:51.486 LIB libspdk_event_accel.a 00:03:51.744 SO libspdk_event_accel.so.6.0 00:03:51.744 SYMLINK libspdk_event_accel.so 00:03:52.002 CC module/event/subsystems/bdev/bdev.o 00:03:52.261 LIB libspdk_event_bdev.a 00:03:52.261 SO libspdk_event_bdev.so.6.0 00:03:52.261 SYMLINK libspdk_event_bdev.so 00:03:52.519 CC module/event/subsystems/nbd/nbd.o 00:03:52.519 CC module/event/subsystems/scsi/scsi.o 00:03:52.519 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:52.519 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:52.519 CC module/event/subsystems/ublk/ublk.o 00:03:52.519 LIB libspdk_event_nbd.a 00:03:52.778 LIB libspdk_event_scsi.a 00:03:52.778 LIB libspdk_event_ublk.a 00:03:52.778 SO libspdk_event_nbd.so.6.0 00:03:52.778 SO libspdk_event_scsi.so.6.0 00:03:52.778 SO libspdk_event_ublk.so.3.0 00:03:52.778 LIB libspdk_event_nvmf.a 00:03:52.778 SYMLINK libspdk_event_nbd.so 00:03:52.778 SYMLINK libspdk_event_scsi.so 00:03:52.778 SO libspdk_event_nvmf.so.6.0 00:03:52.778 SYMLINK libspdk_event_ublk.so 00:03:52.778 SYMLINK libspdk_event_nvmf.so 00:03:53.037 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:53.037 CC module/event/subsystems/iscsi/iscsi.o 00:03:53.296 LIB libspdk_event_vhost_scsi.a 00:03:53.296 LIB libspdk_event_iscsi.a 00:03:53.296 SO libspdk_event_vhost_scsi.so.3.0 00:03:53.296 SO libspdk_event_iscsi.so.6.0 00:03:53.296 SYMLINK libspdk_event_vhost_scsi.so 00:03:53.296 SYMLINK libspdk_event_iscsi.so 00:03:53.555 SO libspdk.so.6.0 00:03:53.555 SYMLINK libspdk.so 00:03:53.830 CC app/spdk_lspci/spdk_lspci.o 00:03:53.830 CC app/trace_record/trace_record.o 00:03:53.830 CXX app/trace/trace.o 00:03:53.830 CC app/spdk_nvme_discover/discovery_aer.o 00:03:53.830 CC app/spdk_nvme_perf/perf.o 00:03:53.830 CC app/spdk_top/spdk_top.o 00:03:53.830 CC app/spdk_nvme_identify/identify.o 00:03:53.830 TEST_HEADER include/spdk/accel.h 00:03:53.830 TEST_HEADER include/spdk/assert.h 00:03:53.830 TEST_HEADER include/spdk/accel_module.h 00:03:53.830 TEST_HEADER include/spdk/barrier.h 00:03:53.830 TEST_HEADER include/spdk/bdev.h 00:03:53.830 TEST_HEADER include/spdk/base64.h 00:03:53.830 TEST_HEADER include/spdk/bdev_module.h 00:03:53.830 TEST_HEADER include/spdk/bdev_zone.h 00:03:53.830 TEST_HEADER include/spdk/bit_array.h 00:03:53.830 TEST_HEADER include/spdk/bit_pool.h 00:03:53.830 TEST_HEADER include/spdk/blob_bdev.h 00:03:53.830 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:53.830 TEST_HEADER include/spdk/blobfs.h 00:03:53.830 CC test/rpc_client/rpc_client_test.o 00:03:53.830 TEST_HEADER include/spdk/conf.h 00:03:53.830 TEST_HEADER include/spdk/blob.h 00:03:53.830 CC 
app/iscsi_tgt/iscsi_tgt.o 00:03:53.830 TEST_HEADER include/spdk/cpuset.h 00:03:53.830 TEST_HEADER include/spdk/crc16.h 00:03:53.830 TEST_HEADER include/spdk/config.h 00:03:53.830 TEST_HEADER include/spdk/crc64.h 00:03:53.830 TEST_HEADER include/spdk/crc32.h 00:03:53.830 TEST_HEADER include/spdk/dif.h 00:03:53.830 CC app/spdk_dd/spdk_dd.o 00:03:53.830 TEST_HEADER include/spdk/endian.h 00:03:53.830 TEST_HEADER include/spdk/dma.h 00:03:53.830 TEST_HEADER include/spdk/env_dpdk.h 00:03:53.830 TEST_HEADER include/spdk/env.h 00:03:53.830 TEST_HEADER include/spdk/event.h 00:03:53.830 TEST_HEADER include/spdk/fd.h 00:03:53.830 TEST_HEADER include/spdk/fd_group.h 00:03:53.830 TEST_HEADER include/spdk/fsdev.h 00:03:53.830 TEST_HEADER include/spdk/file.h 00:03:53.830 CC app/nvmf_tgt/nvmf_main.o 00:03:53.830 TEST_HEADER include/spdk/fsdev_module.h 00:03:53.830 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:53.830 TEST_HEADER include/spdk/ftl.h 00:03:53.830 TEST_HEADER include/spdk/gpt_spec.h 00:03:53.830 TEST_HEADER include/spdk/histogram_data.h 00:03:53.830 TEST_HEADER include/spdk/hexlify.h 00:03:53.830 TEST_HEADER include/spdk/idxd_spec.h 00:03:53.830 TEST_HEADER include/spdk/idxd.h 00:03:53.830 TEST_HEADER include/spdk/init.h 00:03:53.830 TEST_HEADER include/spdk/ioat.h 00:03:53.830 TEST_HEADER include/spdk/iscsi_spec.h 00:03:53.830 TEST_HEADER include/spdk/json.h 00:03:53.830 TEST_HEADER include/spdk/ioat_spec.h 00:03:53.830 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:53.830 TEST_HEADER include/spdk/jsonrpc.h 00:03:53.830 TEST_HEADER include/spdk/keyring.h 00:03:53.830 TEST_HEADER include/spdk/keyring_module.h 00:03:53.830 TEST_HEADER include/spdk/likely.h 00:03:53.830 TEST_HEADER include/spdk/log.h 00:03:53.830 TEST_HEADER include/spdk/memory.h 00:03:53.830 TEST_HEADER include/spdk/lvol.h 00:03:53.830 TEST_HEADER include/spdk/md5.h 00:03:53.830 TEST_HEADER include/spdk/nbd.h 00:03:53.830 TEST_HEADER include/spdk/mmio.h 00:03:53.830 TEST_HEADER include/spdk/notify.h 00:03:53.830 TEST_HEADER include/spdk/net.h 00:03:53.830 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:53.830 TEST_HEADER include/spdk/nvme.h 00:03:53.830 TEST_HEADER include/spdk/nvme_intel.h 00:03:53.830 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:53.830 TEST_HEADER include/spdk/nvme_zns.h 00:03:53.830 TEST_HEADER include/spdk/nvme_spec.h 00:03:53.830 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:53.830 CC app/spdk_tgt/spdk_tgt.o 00:03:53.830 TEST_HEADER include/spdk/nvmf.h 00:03:53.830 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:53.830 TEST_HEADER include/spdk/nvmf_spec.h 00:03:53.830 TEST_HEADER include/spdk/nvmf_transport.h 00:03:53.830 TEST_HEADER include/spdk/opal_spec.h 00:03:53.830 TEST_HEADER include/spdk/pci_ids.h 00:03:53.830 TEST_HEADER include/spdk/queue.h 00:03:53.830 TEST_HEADER include/spdk/reduce.h 00:03:53.830 TEST_HEADER include/spdk/opal.h 00:03:53.830 TEST_HEADER include/spdk/pipe.h 00:03:53.830 TEST_HEADER include/spdk/rpc.h 00:03:53.830 TEST_HEADER include/spdk/scsi.h 00:03:53.830 TEST_HEADER include/spdk/scsi_spec.h 00:03:53.830 TEST_HEADER include/spdk/scheduler.h 00:03:53.830 TEST_HEADER include/spdk/stdinc.h 00:03:53.830 TEST_HEADER include/spdk/sock.h 00:03:53.830 TEST_HEADER include/spdk/string.h 00:03:53.830 TEST_HEADER include/spdk/thread.h 00:03:53.830 TEST_HEADER include/spdk/trace.h 00:03:53.830 TEST_HEADER include/spdk/trace_parser.h 00:03:53.830 TEST_HEADER include/spdk/tree.h 00:03:53.830 TEST_HEADER include/spdk/ublk.h 00:03:53.830 TEST_HEADER include/spdk/uuid.h 00:03:53.830 TEST_HEADER 
include/spdk/util.h 00:03:53.830 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:53.830 TEST_HEADER include/spdk/version.h 00:03:53.830 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:53.830 TEST_HEADER include/spdk/vhost.h 00:03:53.830 TEST_HEADER include/spdk/xor.h 00:03:53.830 TEST_HEADER include/spdk/vmd.h 00:03:53.830 TEST_HEADER include/spdk/zipf.h 00:03:53.830 CXX test/cpp_headers/accel.o 00:03:53.830 CXX test/cpp_headers/accel_module.o 00:03:53.830 CXX test/cpp_headers/assert.o 00:03:53.830 CXX test/cpp_headers/barrier.o 00:03:53.830 CXX test/cpp_headers/bdev_module.o 00:03:53.830 CXX test/cpp_headers/bdev_zone.o 00:03:53.830 CXX test/cpp_headers/bdev.o 00:03:53.830 CXX test/cpp_headers/base64.o 00:03:53.830 CXX test/cpp_headers/bit_pool.o 00:03:53.830 CXX test/cpp_headers/bit_array.o 00:03:53.830 CXX test/cpp_headers/blob_bdev.o 00:03:53.830 CXX test/cpp_headers/blobfs_bdev.o 00:03:53.830 CXX test/cpp_headers/blobfs.o 00:03:53.830 CXX test/cpp_headers/conf.o 00:03:53.830 CXX test/cpp_headers/blob.o 00:03:53.830 CXX test/cpp_headers/config.o 00:03:53.830 CXX test/cpp_headers/cpuset.o 00:03:53.830 CXX test/cpp_headers/crc32.o 00:03:53.830 CXX test/cpp_headers/crc64.o 00:03:53.830 CXX test/cpp_headers/crc16.o 00:03:53.830 CXX test/cpp_headers/endian.o 00:03:53.830 CXX test/cpp_headers/dma.o 00:03:53.830 CXX test/cpp_headers/env_dpdk.o 00:03:53.830 CXX test/cpp_headers/event.o 00:03:53.830 CXX test/cpp_headers/dif.o 00:03:53.830 CXX test/cpp_headers/fd_group.o 00:03:53.830 CXX test/cpp_headers/env.o 00:03:53.830 CXX test/cpp_headers/file.o 00:03:53.830 CXX test/cpp_headers/fsdev.o 00:03:53.830 CXX test/cpp_headers/fd.o 00:03:53.830 CXX test/cpp_headers/fsdev_module.o 00:03:53.830 CXX test/cpp_headers/gpt_spec.o 00:03:53.830 CXX test/cpp_headers/fuse_dispatcher.o 00:03:53.830 CXX test/cpp_headers/ftl.o 00:03:53.830 CXX test/cpp_headers/idxd.o 00:03:53.830 CXX test/cpp_headers/hexlify.o 00:03:53.830 CXX test/cpp_headers/idxd_spec.o 00:03:53.830 CXX test/cpp_headers/init.o 00:03:53.830 CXX test/cpp_headers/ioat.o 00:03:53.830 CXX test/cpp_headers/iscsi_spec.o 00:03:53.830 CXX test/cpp_headers/histogram_data.o 00:03:53.830 CXX test/cpp_headers/jsonrpc.o 00:03:53.830 CXX test/cpp_headers/ioat_spec.o 00:03:53.830 CXX test/cpp_headers/json.o 00:03:53.830 CXX test/cpp_headers/keyring.o 00:03:53.831 CXX test/cpp_headers/keyring_module.o 00:03:53.831 CXX test/cpp_headers/log.o 00:03:53.831 CXX test/cpp_headers/md5.o 00:03:53.831 CXX test/cpp_headers/lvol.o 00:03:53.831 CXX test/cpp_headers/likely.o 00:03:53.831 CXX test/cpp_headers/memory.o 00:03:53.831 CXX test/cpp_headers/mmio.o 00:03:53.831 CXX test/cpp_headers/net.o 00:03:53.831 CXX test/cpp_headers/notify.o 00:03:53.831 CXX test/cpp_headers/nbd.o 00:03:53.831 CXX test/cpp_headers/nvme.o 00:03:53.831 CXX test/cpp_headers/nvme_intel.o 00:03:53.831 CXX test/cpp_headers/nvme_ocssd.o 00:03:53.831 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:53.831 CXX test/cpp_headers/nvme_spec.o 00:03:53.831 CXX test/cpp_headers/nvme_zns.o 00:03:53.831 CXX test/cpp_headers/nvmf_cmd.o 00:03:53.831 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:53.831 CXX test/cpp_headers/nvmf.o 00:03:53.831 CXX test/cpp_headers/nvmf_spec.o 00:03:53.831 CXX test/cpp_headers/nvmf_transport.o 00:03:53.831 CC app/fio/nvme/fio_plugin.o 00:03:53.831 CXX test/cpp_headers/opal.o 00:03:53.831 CC examples/util/zipf/zipf.o 00:03:53.831 CC app/fio/bdev/fio_plugin.o 00:03:54.102 CC test/app/stub/stub.o 00:03:54.102 CC examples/ioat/perf/perf.o 00:03:54.102 CC test/env/memory/memory_ut.o 
00:03:54.102 LINK spdk_lspci 00:03:54.102 CC test/env/pci/pci_ut.o 00:03:54.102 CC test/app/histogram_perf/histogram_perf.o 00:03:54.102 CC test/thread/poller_perf/poller_perf.o 00:03:54.102 CC test/app/jsoncat/jsoncat.o 00:03:54.102 CC examples/ioat/verify/verify.o 00:03:54.102 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:54.102 CC test/app/bdev_svc/bdev_svc.o 00:03:54.102 CC test/env/vtophys/vtophys.o 00:03:54.102 CC test/dma/test_dma/test_dma.o 00:03:54.102 LINK nvmf_tgt 00:03:54.368 LINK rpc_client_test 00:03:54.368 LINK spdk_nvme_discover 00:03:54.368 LINK iscsi_tgt 00:03:54.368 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:54.368 CC test/env/mem_callbacks/mem_callbacks.o 00:03:54.368 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:54.368 LINK spdk_trace_record 00:03:54.368 LINK zipf 00:03:54.368 LINK interrupt_tgt 00:03:54.368 CXX test/cpp_headers/opal_spec.o 00:03:54.627 CXX test/cpp_headers/pci_ids.o 00:03:54.627 CXX test/cpp_headers/pipe.o 00:03:54.627 LINK histogram_perf 00:03:54.627 LINK stub 00:03:54.627 CXX test/cpp_headers/queue.o 00:03:54.627 CXX test/cpp_headers/reduce.o 00:03:54.627 CXX test/cpp_headers/rpc.o 00:03:54.627 CXX test/cpp_headers/scheduler.o 00:03:54.627 LINK poller_perf 00:03:54.627 CXX test/cpp_headers/scsi.o 00:03:54.627 CXX test/cpp_headers/scsi_spec.o 00:03:54.627 CXX test/cpp_headers/sock.o 00:03:54.627 CXX test/cpp_headers/stdinc.o 00:03:54.627 CXX test/cpp_headers/string.o 00:03:54.627 CXX test/cpp_headers/trace.o 00:03:54.627 CXX test/cpp_headers/thread.o 00:03:54.627 CXX test/cpp_headers/trace_parser.o 00:03:54.627 CXX test/cpp_headers/tree.o 00:03:54.627 CXX test/cpp_headers/ublk.o 00:03:54.627 CXX test/cpp_headers/util.o 00:03:54.627 CXX test/cpp_headers/uuid.o 00:03:54.627 CXX test/cpp_headers/version.o 00:03:54.627 CXX test/cpp_headers/vfio_user_pci.o 00:03:54.627 CXX test/cpp_headers/vfio_user_spec.o 00:03:54.627 CXX test/cpp_headers/vhost.o 00:03:54.627 CXX test/cpp_headers/vmd.o 00:03:54.627 CXX test/cpp_headers/xor.o 00:03:54.627 CXX test/cpp_headers/zipf.o 00:03:54.627 LINK bdev_svc 00:03:54.627 LINK ioat_perf 00:03:54.627 LINK spdk_tgt 00:03:54.627 LINK jsoncat 00:03:54.627 LINK env_dpdk_post_init 00:03:54.627 LINK vtophys 00:03:54.627 LINK verify 00:03:54.627 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:54.627 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:54.886 LINK spdk_dd 00:03:54.886 LINK spdk_trace 00:03:54.886 LINK spdk_nvme 00:03:54.886 LINK spdk_bdev 00:03:54.886 LINK pci_ut 00:03:54.886 CC test/event/reactor/reactor.o 00:03:54.886 CC test/event/event_perf/event_perf.o 00:03:54.886 CC examples/sock/hello_world/hello_sock.o 00:03:54.886 CC examples/vmd/lsvmd/lsvmd.o 00:03:54.886 CC examples/vmd/led/led.o 00:03:54.886 CC test/event/reactor_perf/reactor_perf.o 00:03:54.886 CC examples/idxd/perf/perf.o 00:03:54.886 CC test/event/app_repeat/app_repeat.o 00:03:55.144 CC test/event/scheduler/scheduler.o 00:03:55.144 CC examples/thread/thread/thread_ex.o 00:03:55.144 LINK spdk_top 00:03:55.144 LINK nvme_fuzz 00:03:55.144 LINK spdk_nvme_perf 00:03:55.144 LINK reactor 00:03:55.144 LINK lsvmd 00:03:55.144 LINK reactor_perf 00:03:55.144 LINK test_dma 00:03:55.144 LINK event_perf 00:03:55.144 LINK vhost_fuzz 00:03:55.144 LINK led 00:03:55.144 LINK app_repeat 00:03:55.144 LINK spdk_nvme_identify 00:03:55.144 CC app/vhost/vhost.o 00:03:55.144 LINK mem_callbacks 00:03:55.144 LINK hello_sock 00:03:55.403 LINK scheduler 00:03:55.403 LINK idxd_perf 00:03:55.403 LINK thread 00:03:55.403 LINK vhost 00:03:55.403 LINK memory_ut 
00:03:55.661 CC test/nvme/reset/reset.o 00:03:55.661 CC test/nvme/connect_stress/connect_stress.o 00:03:55.661 CC test/nvme/simple_copy/simple_copy.o 00:03:55.661 CC test/nvme/startup/startup.o 00:03:55.661 CC test/nvme/overhead/overhead.o 00:03:55.661 CC test/nvme/fdp/fdp.o 00:03:55.661 CC test/nvme/reserve/reserve.o 00:03:55.661 CC test/nvme/e2edp/nvme_dp.o 00:03:55.661 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:55.661 CC test/nvme/fused_ordering/fused_ordering.o 00:03:55.661 CC test/nvme/compliance/nvme_compliance.o 00:03:55.661 CC test/nvme/boot_partition/boot_partition.o 00:03:55.661 CC test/nvme/cuse/cuse.o 00:03:55.661 CC test/nvme/aer/aer.o 00:03:55.661 CC test/nvme/err_injection/err_injection.o 00:03:55.661 CC test/nvme/sgl/sgl.o 00:03:55.661 CC test/accel/dif/dif.o 00:03:55.661 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:55.661 CC test/blobfs/mkfs/mkfs.o 00:03:55.661 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:55.661 CC examples/nvme/abort/abort.o 00:03:55.661 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:55.661 CC examples/nvme/hello_world/hello_world.o 00:03:55.661 CC examples/nvme/hotplug/hotplug.o 00:03:55.661 CC examples/nvme/arbitration/arbitration.o 00:03:55.661 CC examples/nvme/reconnect/reconnect.o 00:03:55.661 CC test/lvol/esnap/esnap.o 00:03:55.661 CC examples/accel/perf/accel_perf.o 00:03:55.919 LINK connect_stress 00:03:55.919 LINK boot_partition 00:03:55.919 CC examples/blob/cli/blobcli.o 00:03:55.919 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:55.919 LINK startup 00:03:55.919 CC examples/blob/hello_world/hello_blob.o 00:03:55.919 LINK reserve 00:03:55.919 LINK doorbell_aers 00:03:55.919 LINK err_injection 00:03:55.919 LINK simple_copy 00:03:55.919 LINK fused_ordering 00:03:55.919 LINK cmb_copy 00:03:55.919 LINK reset 00:03:55.919 LINK nvme_dp 00:03:55.919 LINK pmr_persistence 00:03:55.919 LINK mkfs 00:03:55.919 LINK aer 00:03:55.919 LINK overhead 00:03:55.919 LINK sgl 00:03:55.919 LINK hotplug 00:03:55.919 LINK hello_world 00:03:55.919 LINK fdp 00:03:55.919 LINK nvme_compliance 00:03:55.919 LINK arbitration 00:03:55.919 LINK iscsi_fuzz 00:03:55.919 LINK abort 00:03:56.177 LINK reconnect 00:03:56.177 LINK hello_blob 00:03:56.177 LINK hello_fsdev 00:03:56.177 LINK nvme_manage 00:03:56.177 LINK accel_perf 00:03:56.177 LINK dif 00:03:56.177 LINK blobcli 00:03:56.744 CC examples/bdev/hello_world/hello_bdev.o 00:03:56.744 LINK cuse 00:03:56.744 CC examples/bdev/bdevperf/bdevperf.o 00:03:56.744 CC test/bdev/bdevio/bdevio.o 00:03:57.002 LINK hello_bdev 00:03:57.002 LINK bdevio 00:03:57.260 LINK bdevperf 00:03:57.828 CC examples/nvmf/nvmf/nvmf.o 00:03:58.087 LINK nvmf 00:03:59.464 LINK esnap 00:03:59.464 00:03:59.464 real 0m53.371s 00:03:59.464 user 6m44.608s 00:03:59.464 sys 2m46.236s 00:03:59.464 05:32:33 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:59.464 05:32:33 make -- common/autotest_common.sh@10 -- $ set +x 00:03:59.464 ************************************ 00:03:59.464 END TEST make 00:03:59.464 ************************************ 00:03:59.464 05:32:33 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:59.464 05:32:33 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:59.464 05:32:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:59.464 05:32:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.464 05:32:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:59.464 05:32:33 -- pm/common@44 -- $ 
pid=3053973 00:03:59.464 05:32:33 -- pm/common@50 -- $ kill -TERM 3053973 00:03:59.464 05:32:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.464 05:32:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:59.464 05:32:33 -- pm/common@44 -- $ pid=3053975 00:03:59.464 05:32:33 -- pm/common@50 -- $ kill -TERM 3053975 00:03:59.464 05:32:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.464 05:32:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:59.464 05:32:33 -- pm/common@44 -- $ pid=3053977 00:03:59.464 05:32:33 -- pm/common@50 -- $ kill -TERM 3053977 00:03:59.464 05:32:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.464 05:32:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:59.464 05:32:33 -- pm/common@44 -- $ pid=3054001 00:03:59.464 05:32:33 -- pm/common@50 -- $ sudo -E kill -TERM 3054001 00:03:59.464 05:32:33 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:59.464 05:32:33 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:59.464 05:32:33 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:59.723 05:32:33 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:59.723 05:32:33 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:59.723 05:32:33 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:59.723 05:32:33 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:59.723 05:32:33 -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.723 05:32:33 -- scripts/common.sh@336 -- # read -ra ver1 00:03:59.723 05:32:33 -- scripts/common.sh@337 -- # IFS=.-: 00:03:59.723 05:32:33 -- scripts/common.sh@337 -- # read -ra ver2 00:03:59.723 05:32:33 -- scripts/common.sh@338 -- # local 'op=<' 00:03:59.723 05:32:33 -- scripts/common.sh@340 -- # ver1_l=2 00:03:59.723 05:32:33 -- scripts/common.sh@341 -- # ver2_l=1 00:03:59.723 05:32:33 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:59.723 05:32:33 -- scripts/common.sh@344 -- # case "$op" in 00:03:59.723 05:32:33 -- scripts/common.sh@345 -- # : 1 00:03:59.723 05:32:33 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:59.723 05:32:33 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:59.723 05:32:33 -- scripts/common.sh@365 -- # decimal 1 00:03:59.723 05:32:33 -- scripts/common.sh@353 -- # local d=1 00:03:59.723 05:32:33 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.723 05:32:33 -- scripts/common.sh@355 -- # echo 1 00:03:59.723 05:32:33 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:59.723 05:32:33 -- scripts/common.sh@366 -- # decimal 2 00:03:59.723 05:32:33 -- scripts/common.sh@353 -- # local d=2 00:03:59.723 05:32:33 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.723 05:32:33 -- scripts/common.sh@355 -- # echo 2 00:03:59.723 05:32:33 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:59.723 05:32:33 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:59.723 05:32:33 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:59.723 05:32:33 -- scripts/common.sh@368 -- # return 0 00:03:59.723 05:32:33 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.723 05:32:33 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:59.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.723 --rc genhtml_branch_coverage=1 00:03:59.723 --rc genhtml_function_coverage=1 00:03:59.723 --rc genhtml_legend=1 00:03:59.723 --rc geninfo_all_blocks=1 00:03:59.723 --rc geninfo_unexecuted_blocks=1 00:03:59.723 00:03:59.723 ' 00:03:59.723 05:32:33 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:59.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.723 --rc genhtml_branch_coverage=1 00:03:59.723 --rc genhtml_function_coverage=1 00:03:59.723 --rc genhtml_legend=1 00:03:59.723 --rc geninfo_all_blocks=1 00:03:59.723 --rc geninfo_unexecuted_blocks=1 00:03:59.723 00:03:59.723 ' 00:03:59.723 05:32:33 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:59.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.723 --rc genhtml_branch_coverage=1 00:03:59.723 --rc genhtml_function_coverage=1 00:03:59.723 --rc genhtml_legend=1 00:03:59.723 --rc geninfo_all_blocks=1 00:03:59.723 --rc geninfo_unexecuted_blocks=1 00:03:59.723 00:03:59.723 ' 00:03:59.723 05:32:33 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:59.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.723 --rc genhtml_branch_coverage=1 00:03:59.723 --rc genhtml_function_coverage=1 00:03:59.723 --rc genhtml_legend=1 00:03:59.723 --rc geninfo_all_blocks=1 00:03:59.723 --rc geninfo_unexecuted_blocks=1 00:03:59.723 00:03:59.723 ' 00:03:59.723 05:32:33 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:59.723 05:32:33 -- nvmf/common.sh@7 -- # uname -s 00:03:59.723 05:32:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:59.723 05:32:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:59.723 05:32:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:59.723 05:32:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:59.723 05:32:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:59.723 05:32:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:59.723 05:32:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:59.723 05:32:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:59.723 05:32:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:59.723 05:32:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:59.723 05:32:33 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:03:59.723 05:32:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:03:59.723 05:32:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:59.723 05:32:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:59.723 05:32:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:59.723 05:32:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:59.723 05:32:33 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:59.723 05:32:33 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:59.723 05:32:33 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:59.723 05:32:33 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:59.723 05:32:33 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:59.723 05:32:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.723 05:32:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.723 05:32:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.723 05:32:33 -- paths/export.sh@5 -- # export PATH 00:03:59.723 05:32:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.723 05:32:33 -- nvmf/common.sh@51 -- # : 0 00:03:59.723 05:32:33 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:59.723 05:32:33 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:59.723 05:32:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:59.723 05:32:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:59.723 05:32:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:59.723 05:32:33 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:59.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:59.723 05:32:33 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:59.723 05:32:33 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:59.723 05:32:33 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:59.723 05:32:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:59.723 05:32:33 -- spdk/autotest.sh@32 -- # uname -s 00:03:59.723 05:32:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:59.723 05:32:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:59.723 05:32:33 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:03:59.723 05:32:33 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:59.723 05:32:33 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:59.723 05:32:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:59.723 05:32:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:59.723 05:32:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:59.723 05:32:33 -- spdk/autotest.sh@48 -- # udevadm_pid=3131915 00:03:59.723 05:32:33 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:59.723 05:32:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:59.723 05:32:33 -- pm/common@17 -- # local monitor 00:03:59.723 05:32:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.724 05:32:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.724 05:32:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.724 05:32:33 -- pm/common@21 -- # date +%s 00:03:59.724 05:32:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.724 05:32:33 -- pm/common@21 -- # date +%s 00:03:59.724 05:32:33 -- pm/common@25 -- # sleep 1 00:03:59.724 05:32:33 -- pm/common@21 -- # date +%s 00:03:59.724 05:32:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734323553 00:03:59.724 05:32:33 -- pm/common@21 -- # date +%s 00:03:59.724 05:32:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734323553 00:03:59.724 05:32:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734323553 00:03:59.724 05:32:33 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734323553 00:03:59.724 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734323553_collect-cpu-load.pm.log 00:03:59.724 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734323553_collect-vmstat.pm.log 00:03:59.724 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734323553_collect-cpu-temp.pm.log 00:03:59.724 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734323553_collect-bmc-pm.bmc.pm.log 00:04:00.660 05:32:34 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:00.660 05:32:34 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:00.660 05:32:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:00.660 05:32:34 -- common/autotest_common.sh@10 -- # set +x 00:04:00.660 05:32:34 -- spdk/autotest.sh@59 -- # create_test_list 00:04:00.660 05:32:34 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:00.660 05:32:34 -- common/autotest_common.sh@10 -- # set +x 00:04:00.660 05:32:34 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:00.660 05:32:34 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:00.660 05:32:34 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:00.660 05:32:34 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:00.660 05:32:34 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:00.660 05:32:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:00.661 05:32:34 -- common/autotest_common.sh@1455 -- # uname 00:04:00.661 05:32:34 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:00.661 05:32:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:00.661 05:32:34 -- common/autotest_common.sh@1475 -- # uname 00:04:00.919 05:32:34 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:00.919 05:32:34 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:00.919 05:32:34 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:00.919 lcov: LCOV version 1.15 00:04:00.919 05:32:34 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:19.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:19.010 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:25.574 05:32:58 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:25.574 05:32:58 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:25.574 05:32:58 -- common/autotest_common.sh@10 -- # set +x 00:04:25.574 05:32:58 -- spdk/autotest.sh@78 -- # rm -f 00:04:25.574 05:32:58 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:28.107 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:28.107 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:28.107 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:28.107 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:28.107 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:28.107 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:28.107 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:28.107 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:28.107 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:28.107 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:28.107 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:28.107 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:28.107 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:28.107 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:28.107 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:28.107 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:28.107 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:28.107 05:33:01 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:28.107 05:33:01 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:28.107 05:33:01 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:28.107 05:33:01 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:28.107 05:33:01 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:28.107 05:33:01 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:28.107 05:33:01 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:28.107 05:33:01 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:28.107 05:33:01 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:28.107 05:33:01 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:28.107 05:33:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:28.107 05:33:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:28.107 05:33:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:28.107 05:33:01 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:28.107 05:33:01 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:28.107 No valid GPT data, bailing 00:04:28.107 05:33:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:28.107 05:33:01 -- scripts/common.sh@394 -- # pt= 00:04:28.108 05:33:01 -- scripts/common.sh@395 -- # return 1 00:04:28.108 05:33:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:28.108 1+0 records in 00:04:28.108 1+0 records out 00:04:28.108 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00430432 s, 244 MB/s 00:04:28.108 05:33:01 -- spdk/autotest.sh@105 -- # sync 00:04:28.108 05:33:01 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:28.108 05:33:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:28.108 05:33:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:33.376 05:33:06 -- spdk/autotest.sh@111 -- # uname -s 00:04:33.376 05:33:06 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:33.376 05:33:06 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:33.376 05:33:06 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:35.277 Hugepages 00:04:35.277 node hugesize free / total 00:04:35.277 node0 1048576kB 0 / 0 00:04:35.277 node0 2048kB 0 / 0 00:04:35.277 node1 1048576kB 0 / 0 00:04:35.277 node1 2048kB 0 / 0 00:04:35.277 00:04:35.277 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:35.277 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:35.277 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:35.277 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:35.277 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:35.277 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:35.277 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:35.277 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:35.277 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:35.535 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:35.535 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:35.535 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:35.535 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:35.535 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:35.535 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:35.535 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:35.535 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:35.535 I/OAT 0000:80:04.7 8086 
2021 1 ioatdma - - 00:04:35.535 05:33:09 -- spdk/autotest.sh@117 -- # uname -s 00:04:35.535 05:33:09 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:35.535 05:33:09 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:35.535 05:33:09 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:38.065 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:38.065 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:38.065 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:38.323 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:38.323 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:38.323 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:38.323 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:38.323 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:38.323 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:38.323 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:38.323 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:38.323 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:38.323 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:38.323 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:38.323 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:38.323 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:39.258 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:39.258 05:33:13 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:40.196 05:33:14 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:40.196 05:33:14 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:40.196 05:33:14 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:40.196 05:33:14 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:40.196 05:33:14 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:40.196 05:33:14 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:40.196 05:33:14 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:40.196 05:33:14 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:40.196 05:33:14 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:40.454 05:33:14 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:40.454 05:33:14 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:04:40.454 05:33:14 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:42.990 Waiting for block devices as requested 00:04:42.990 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:42.990 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:42.990 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:42.990 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:42.990 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:43.249 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:43.249 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:43.249 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:43.508 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:43.508 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:43.508 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:43.508 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:43.767 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:43.767 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:43.767 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:44.026 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:44.026 0000:80:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:04:44.026 05:33:17 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:44.026 05:33:17 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:44.026 05:33:17 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:04:44.026 05:33:17 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:44.026 05:33:17 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:44.026 05:33:17 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:44.026 05:33:17 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:44.026 05:33:17 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:44.026 05:33:17 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:44.026 05:33:17 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:44.026 05:33:17 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:44.026 05:33:17 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:44.026 05:33:17 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:44.026 05:33:17 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:04:44.026 05:33:17 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:44.026 05:33:17 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:44.026 05:33:17 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:44.026 05:33:17 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:44.026 05:33:17 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:44.026 05:33:17 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:44.026 05:33:17 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:44.026 05:33:17 -- common/autotest_common.sh@1541 -- # continue 00:04:44.026 05:33:17 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:44.026 05:33:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:44.026 05:33:17 -- common/autotest_common.sh@10 -- # set +x 00:04:44.026 05:33:17 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:44.026 05:33:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:44.026 05:33:17 -- common/autotest_common.sh@10 -- # set +x 00:04:44.026 05:33:17 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:47.356 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:47.356 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:47.356 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:47.356 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:47.356 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:47.356 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:47.356 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:47.356 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:47.356 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:47.356 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:47.356 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:47.356 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:47.356 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:47.356 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:47.356 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:47.356 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:47.615 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:47.874 05:33:21 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:04:47.874 05:33:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:47.874 05:33:21 -- common/autotest_common.sh@10 -- # set +x 00:04:47.874 05:33:21 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:47.874 05:33:21 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:47.874 05:33:21 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:47.874 05:33:21 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:47.874 05:33:21 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:47.874 05:33:21 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:47.874 05:33:21 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:47.874 05:33:21 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:47.874 05:33:21 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:47.874 05:33:21 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:47.874 05:33:21 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:47.874 05:33:21 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:47.874 05:33:21 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:47.874 05:33:21 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:47.874 05:33:21 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:04:47.874 05:33:21 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:47.874 05:33:21 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:47.874 05:33:21 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:47.874 05:33:21 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:47.874 05:33:21 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:47.874 05:33:21 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:47.874 05:33:21 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:04:47.874 05:33:21 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:04:47.874 05:33:21 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3146415 00:04:47.874 05:33:21 -- common/autotest_common.sh@1583 -- # waitforlisten 3146415 00:04:47.874 05:33:21 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:47.874 05:33:21 -- common/autotest_common.sh@831 -- # '[' -z 3146415 ']' 00:04:47.874 05:33:21 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.874 05:33:21 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:47.874 05:33:21 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.874 05:33:21 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:47.874 05:33:21 -- common/autotest_common.sh@10 -- # set +x 00:04:48.133 [2024-12-16 05:33:21.770059] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
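The opal_revert_cleanup path traced above builds its controller list by matching each NVMe bdf's PCI device id against 0x0a54. A minimal standalone sketch of that lookup, using $rootdir the same way the traced common/autotest_common.sh helpers do, with the target id taken from this log:
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
target_id=0x0a54                                        # device id reported above for 0000:5e:00.0
bdfs=()
for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")    # PCI device id exposed by sysfs
    [[ $device == "$target_id" ]] && bdfs+=("$bdf")
done
printf '%s\n' "${bdfs[@]}"                              # prints 0000:5e:00.0 on this node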
00:04:48.133 [2024-12-16 05:33:21.770105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3146415 ] 00:04:48.133 [2024-12-16 05:33:21.825985] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.133 [2024-12-16 05:33:21.865944] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.392 05:33:22 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.392 05:33:22 -- common/autotest_common.sh@864 -- # return 0 00:04:48.392 05:33:22 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:48.392 05:33:22 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:48.392 05:33:22 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:51.678 nvme0n1 00:04:51.678 05:33:25 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:51.678 [2024-12-16 05:33:25.218881] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:51.678 [2024-12-16 05:33:25.218909] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:51.678 request: 00:04:51.678 { 00:04:51.678 "nvme_ctrlr_name": "nvme0", 00:04:51.678 "password": "test", 00:04:51.678 "method": "bdev_nvme_opal_revert", 00:04:51.678 "req_id": 1 00:04:51.678 } 00:04:51.678 Got JSON-RPC error response 00:04:51.678 response: 00:04:51.678 { 00:04:51.678 "code": -32603, 00:04:51.678 "message": "Internal error" 00:04:51.678 } 00:04:51.678 05:33:25 -- common/autotest_common.sh@1589 -- # true 00:04:51.678 05:33:25 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:51.678 05:33:25 -- common/autotest_common.sh@1593 -- # killprocess 3146415 00:04:51.678 05:33:25 -- common/autotest_common.sh@950 -- # '[' -z 3146415 ']' 00:04:51.678 05:33:25 -- common/autotest_common.sh@954 -- # kill -0 3146415 00:04:51.678 05:33:25 -- common/autotest_common.sh@955 -- # uname 00:04:51.678 05:33:25 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:51.678 05:33:25 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3146415 00:04:51.678 05:33:25 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:51.678 05:33:25 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:51.678 05:33:25 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3146415' 00:04:51.678 killing process with pid 3146415 00:04:51.678 05:33:25 -- common/autotest_common.sh@969 -- # kill 3146415 00:04:51.678 05:33:25 -- common/autotest_common.sh@974 -- # wait 3146415 00:04:53.055 05:33:26 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:53.055 05:33:26 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:53.055 05:33:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:53.055 05:33:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:53.055 05:33:26 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:53.055 05:33:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:53.055 05:33:26 -- common/autotest_common.sh@10 -- # set +x 00:04:53.055 05:33:26 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:53.055 05:33:26 -- spdk/autotest.sh@155 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:53.055 05:33:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.055 05:33:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.055 05:33:26 -- common/autotest_common.sh@10 -- # set +x 00:04:53.055 ************************************ 00:04:53.055 START TEST env 00:04:53.055 ************************************ 00:04:53.055 05:33:26 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:53.314 * Looking for test storage... 00:04:53.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:53.314 05:33:26 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:53.314 05:33:26 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:53.314 05:33:26 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:53.314 05:33:27 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:53.314 05:33:27 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.314 05:33:27 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.315 05:33:27 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.315 05:33:27 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.315 05:33:27 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.315 05:33:27 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.315 05:33:27 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.315 05:33:27 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.315 05:33:27 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.315 05:33:27 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.315 05:33:27 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.315 05:33:27 env -- scripts/common.sh@344 -- # case "$op" in 00:04:53.315 05:33:27 env -- scripts/common.sh@345 -- # : 1 00:04:53.315 05:33:27 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.315 05:33:27 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.315 05:33:27 env -- scripts/common.sh@365 -- # decimal 1 00:04:53.315 05:33:27 env -- scripts/common.sh@353 -- # local d=1 00:04:53.315 05:33:27 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.315 05:33:27 env -- scripts/common.sh@355 -- # echo 1 00:04:53.315 05:33:27 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.315 05:33:27 env -- scripts/common.sh@366 -- # decimal 2 00:04:53.315 05:33:27 env -- scripts/common.sh@353 -- # local d=2 00:04:53.315 05:33:27 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.315 05:33:27 env -- scripts/common.sh@355 -- # echo 2 00:04:53.315 05:33:27 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.315 05:33:27 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.315 05:33:27 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.315 05:33:27 env -- scripts/common.sh@368 -- # return 0 00:04:53.315 05:33:27 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.315 05:33:27 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:53.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.315 --rc genhtml_branch_coverage=1 00:04:53.315 --rc genhtml_function_coverage=1 00:04:53.315 --rc genhtml_legend=1 00:04:53.315 --rc geninfo_all_blocks=1 00:04:53.315 --rc geninfo_unexecuted_blocks=1 00:04:53.315 00:04:53.315 ' 00:04:53.315 05:33:27 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:53.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.315 --rc genhtml_branch_coverage=1 00:04:53.315 --rc genhtml_function_coverage=1 00:04:53.315 --rc genhtml_legend=1 00:04:53.315 --rc geninfo_all_blocks=1 00:04:53.315 --rc geninfo_unexecuted_blocks=1 00:04:53.315 00:04:53.315 ' 00:04:53.315 05:33:27 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:53.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.315 --rc genhtml_branch_coverage=1 00:04:53.315 --rc genhtml_function_coverage=1 00:04:53.315 --rc genhtml_legend=1 00:04:53.315 --rc geninfo_all_blocks=1 00:04:53.315 --rc geninfo_unexecuted_blocks=1 00:04:53.315 00:04:53.315 ' 00:04:53.315 05:33:27 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:53.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.315 --rc genhtml_branch_coverage=1 00:04:53.315 --rc genhtml_function_coverage=1 00:04:53.315 --rc genhtml_legend=1 00:04:53.315 --rc geninfo_all_blocks=1 00:04:53.315 --rc geninfo_unexecuted_blocks=1 00:04:53.315 00:04:53.315 ' 00:04:53.315 05:33:27 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:53.315 05:33:27 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.315 05:33:27 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.315 05:33:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:53.315 ************************************ 00:04:53.315 START TEST env_memory 00:04:53.315 ************************************ 00:04:53.315 05:33:27 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:53.315 00:04:53.315 00:04:53.315 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.315 http://cunit.sourceforge.net/ 00:04:53.315 00:04:53.315 00:04:53.315 Suite: memory 00:04:53.315 Test: alloc and free memory map ...[2024-12-16 05:33:27.141552] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:53.315 passed 00:04:53.315 Test: mem map translation ...[2024-12-16 05:33:27.159992] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:53.315 [2024-12-16 05:33:27.160010] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:53.315 [2024-12-16 05:33:27.160042] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:53.315 [2024-12-16 05:33:27.160048] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:53.575 passed 00:04:53.575 Test: mem map registration ...[2024-12-16 05:33:27.195603] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:53.575 [2024-12-16 05:33:27.195628] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:53.575 passed 00:04:53.575 Test: mem map adjacent registrations ...passed 00:04:53.575 00:04:53.575 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.575 suites 1 1 n/a 0 0 00:04:53.575 tests 4 4 4 0 0 00:04:53.575 asserts 152 152 152 0 n/a 00:04:53.575 00:04:53.575 Elapsed time = 0.130 seconds 00:04:53.575 00:04:53.575 real 0m0.139s 00:04:53.575 user 0m0.130s 00:04:53.575 sys 0m0.008s 00:04:53.575 05:33:27 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.575 05:33:27 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:53.575 ************************************ 00:04:53.575 END TEST env_memory 00:04:53.575 ************************************ 00:04:53.575 05:33:27 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:53.575 05:33:27 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.575 05:33:27 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.575 05:33:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:53.575 ************************************ 00:04:53.575 START TEST env_vtophys 00:04:53.575 ************************************ 00:04:53.575 05:33:27 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:53.575 EAL: lib.eal log level changed from notice to debug 00:04:53.575 EAL: Detected lcore 0 as core 0 on socket 0 00:04:53.575 EAL: Detected lcore 1 as core 1 on socket 0 00:04:53.575 EAL: Detected lcore 2 as core 2 on socket 0 00:04:53.575 EAL: Detected lcore 3 as core 3 on socket 0 00:04:53.575 EAL: Detected lcore 4 as core 4 on socket 0 00:04:53.575 EAL: Detected lcore 5 as core 5 on socket 0 00:04:53.575 EAL: Detected lcore 6 as core 6 on socket 0 00:04:53.575 EAL: Detected lcore 7 as core 8 on socket 0 00:04:53.575 EAL: Detected lcore 8 as core 9 on socket 0 00:04:53.575 EAL: Detected lcore 9 as core 10 on socket 0 00:04:53.575 EAL: Detected lcore 10 as 
core 11 on socket 0 00:04:53.575 EAL: Detected lcore 11 as core 12 on socket 0 00:04:53.575 EAL: Detected lcore 12 as core 13 on socket 0 00:04:53.575 EAL: Detected lcore 13 as core 16 on socket 0 00:04:53.575 EAL: Detected lcore 14 as core 17 on socket 0 00:04:53.575 EAL: Detected lcore 15 as core 18 on socket 0 00:04:53.575 EAL: Detected lcore 16 as core 19 on socket 0 00:04:53.575 EAL: Detected lcore 17 as core 20 on socket 0 00:04:53.575 EAL: Detected lcore 18 as core 21 on socket 0 00:04:53.575 EAL: Detected lcore 19 as core 25 on socket 0 00:04:53.575 EAL: Detected lcore 20 as core 26 on socket 0 00:04:53.575 EAL: Detected lcore 21 as core 27 on socket 0 00:04:53.575 EAL: Detected lcore 22 as core 28 on socket 0 00:04:53.575 EAL: Detected lcore 23 as core 29 on socket 0 00:04:53.575 EAL: Detected lcore 24 as core 0 on socket 1 00:04:53.575 EAL: Detected lcore 25 as core 1 on socket 1 00:04:53.575 EAL: Detected lcore 26 as core 2 on socket 1 00:04:53.575 EAL: Detected lcore 27 as core 3 on socket 1 00:04:53.575 EAL: Detected lcore 28 as core 4 on socket 1 00:04:53.575 EAL: Detected lcore 29 as core 5 on socket 1 00:04:53.575 EAL: Detected lcore 30 as core 6 on socket 1 00:04:53.575 EAL: Detected lcore 31 as core 8 on socket 1 00:04:53.575 EAL: Detected lcore 32 as core 9 on socket 1 00:04:53.575 EAL: Detected lcore 33 as core 10 on socket 1 00:04:53.575 EAL: Detected lcore 34 as core 11 on socket 1 00:04:53.575 EAL: Detected lcore 35 as core 12 on socket 1 00:04:53.575 EAL: Detected lcore 36 as core 13 on socket 1 00:04:53.575 EAL: Detected lcore 37 as core 16 on socket 1 00:04:53.575 EAL: Detected lcore 38 as core 17 on socket 1 00:04:53.575 EAL: Detected lcore 39 as core 18 on socket 1 00:04:53.575 EAL: Detected lcore 40 as core 19 on socket 1 00:04:53.575 EAL: Detected lcore 41 as core 20 on socket 1 00:04:53.575 EAL: Detected lcore 42 as core 21 on socket 1 00:04:53.575 EAL: Detected lcore 43 as core 25 on socket 1 00:04:53.575 EAL: Detected lcore 44 as core 26 on socket 1 00:04:53.575 EAL: Detected lcore 45 as core 27 on socket 1 00:04:53.575 EAL: Detected lcore 46 as core 28 on socket 1 00:04:53.575 EAL: Detected lcore 47 as core 29 on socket 1 00:04:53.575 EAL: Detected lcore 48 as core 0 on socket 0 00:04:53.575 EAL: Detected lcore 49 as core 1 on socket 0 00:04:53.575 EAL: Detected lcore 50 as core 2 on socket 0 00:04:53.575 EAL: Detected lcore 51 as core 3 on socket 0 00:04:53.575 EAL: Detected lcore 52 as core 4 on socket 0 00:04:53.575 EAL: Detected lcore 53 as core 5 on socket 0 00:04:53.575 EAL: Detected lcore 54 as core 6 on socket 0 00:04:53.575 EAL: Detected lcore 55 as core 8 on socket 0 00:04:53.575 EAL: Detected lcore 56 as core 9 on socket 0 00:04:53.575 EAL: Detected lcore 57 as core 10 on socket 0 00:04:53.575 EAL: Detected lcore 58 as core 11 on socket 0 00:04:53.575 EAL: Detected lcore 59 as core 12 on socket 0 00:04:53.575 EAL: Detected lcore 60 as core 13 on socket 0 00:04:53.575 EAL: Detected lcore 61 as core 16 on socket 0 00:04:53.575 EAL: Detected lcore 62 as core 17 on socket 0 00:04:53.575 EAL: Detected lcore 63 as core 18 on socket 0 00:04:53.575 EAL: Detected lcore 64 as core 19 on socket 0 00:04:53.575 EAL: Detected lcore 65 as core 20 on socket 0 00:04:53.575 EAL: Detected lcore 66 as core 21 on socket 0 00:04:53.575 EAL: Detected lcore 67 as core 25 on socket 0 00:04:53.575 EAL: Detected lcore 68 as core 26 on socket 0 00:04:53.575 EAL: Detected lcore 69 as core 27 on socket 0 00:04:53.576 EAL: Detected lcore 70 as core 28 on socket 0 00:04:53.576 
EAL: Detected lcore 71 as core 29 on socket 0 00:04:53.576 EAL: Detected lcore 72 as core 0 on socket 1 00:04:53.576 EAL: Detected lcore 73 as core 1 on socket 1 00:04:53.576 EAL: Detected lcore 74 as core 2 on socket 1 00:04:53.576 EAL: Detected lcore 75 as core 3 on socket 1 00:04:53.576 EAL: Detected lcore 76 as core 4 on socket 1 00:04:53.576 EAL: Detected lcore 77 as core 5 on socket 1 00:04:53.576 EAL: Detected lcore 78 as core 6 on socket 1 00:04:53.576 EAL: Detected lcore 79 as core 8 on socket 1 00:04:53.576 EAL: Detected lcore 80 as core 9 on socket 1 00:04:53.576 EAL: Detected lcore 81 as core 10 on socket 1 00:04:53.576 EAL: Detected lcore 82 as core 11 on socket 1 00:04:53.576 EAL: Detected lcore 83 as core 12 on socket 1 00:04:53.576 EAL: Detected lcore 84 as core 13 on socket 1 00:04:53.576 EAL: Detected lcore 85 as core 16 on socket 1 00:04:53.576 EAL: Detected lcore 86 as core 17 on socket 1 00:04:53.576 EAL: Detected lcore 87 as core 18 on socket 1 00:04:53.576 EAL: Detected lcore 88 as core 19 on socket 1 00:04:53.576 EAL: Detected lcore 89 as core 20 on socket 1 00:04:53.576 EAL: Detected lcore 90 as core 21 on socket 1 00:04:53.576 EAL: Detected lcore 91 as core 25 on socket 1 00:04:53.576 EAL: Detected lcore 92 as core 26 on socket 1 00:04:53.576 EAL: Detected lcore 93 as core 27 on socket 1 00:04:53.576 EAL: Detected lcore 94 as core 28 on socket 1 00:04:53.576 EAL: Detected lcore 95 as core 29 on socket 1 00:04:53.576 EAL: Maximum logical cores by configuration: 128 00:04:53.576 EAL: Detected CPU lcores: 96 00:04:53.576 EAL: Detected NUMA nodes: 2 00:04:53.576 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:53.576 EAL: Detected shared linkage of DPDK 00:04:53.576 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:04:53.576 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:04:53.576 EAL: Registered [vdev] bus. 00:04:53.576 EAL: bus.vdev log level changed from disabled to notice 00:04:53.576 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:04:53.576 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:04:53.576 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:53.576 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:53.576 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:53.576 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:53.576 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:53.576 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:53.576 EAL: No shared files mode enabled, IPC will be disabled 00:04:53.576 EAL: No shared files mode enabled, IPC is disabled 00:04:53.576 EAL: Bus pci wants IOVA as 'DC' 00:04:53.576 EAL: Bus vdev wants IOVA as 'DC' 00:04:53.576 EAL: Buses did not request a specific IOVA mode. 00:04:53.576 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:53.576 EAL: Selected IOVA mode 'VA' 00:04:53.576 EAL: Probing VFIO support... 
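The IOVA-mode selection and VFIO probing reported here depend on what the kernel exposes to user space. A rough host-side check of the same preconditions, offered as an approximation rather than the exact probe EAL performs:
# a non-empty iommu_groups tree means the kernel built device-to-IOMMU mappings
if [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
    echo "IOMMU groups present: IOVA-as-VA is possible"
fi
# the VFIO container node appears once the vfio module set is loaded
[ -e /dev/vfio/vfio ] && echo "VFIO container device present"
lsmod | grep -E '^vfio' || echo "no vfio modules loaded"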
00:04:53.576 EAL: IOMMU type 1 (Type 1) is supported 00:04:53.576 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:53.576 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:53.576 EAL: VFIO support initialized 00:04:53.576 EAL: Ask a virtual area of 0x2e000 bytes 00:04:53.576 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:53.576 EAL: Setting up physically contiguous memory... 00:04:53.576 EAL: Setting maximum number of open files to 524288 00:04:53.576 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:53.576 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:53.576 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:53.576 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.576 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:53.576 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.576 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.576 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:53.576 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:53.576 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.576 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:53.576 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.576 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.576 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:53.576 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:53.576 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.576 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:53.576 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.576 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.576 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:53.576 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:53.576 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.576 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:53.576 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.576 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.576 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:53.576 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:53.576 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:53.576 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.576 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:53.576 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.576 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.576 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:53.576 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:53.576 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.576 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:53.576 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.576 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.576 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:53.576 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:53.576 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.576 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:53.576 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.576 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.576 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:04:53.576 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:53.576 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.576 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:53.576 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.576 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.576 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:53.576 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:53.576 EAL: Hugepages will be freed exactly as allocated. 00:04:53.576 EAL: No shared files mode enabled, IPC is disabled 00:04:53.576 EAL: No shared files mode enabled, IPC is disabled 00:04:53.576 EAL: TSC frequency is ~2100000 KHz 00:04:53.576 EAL: Main lcore 0 is ready (tid=7f98219fda00;cpuset=[0]) 00:04:53.576 EAL: Trying to obtain current memory policy. 00:04:53.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.576 EAL: Restoring previous memory policy: 0 00:04:53.576 EAL: request: mp_malloc_sync 00:04:53.576 EAL: No shared files mode enabled, IPC is disabled 00:04:53.576 EAL: Heap on socket 0 was expanded by 2MB 00:04:53.576 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:04:53.576 EAL: probe driver: 8086:37d2 net_i40e 00:04:53.576 EAL: Not managed by a supported kernel driver, skipped 00:04:53.576 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:04:53.576 EAL: probe driver: 8086:37d2 net_i40e 00:04:53.576 EAL: Not managed by a supported kernel driver, skipped 00:04:53.576 EAL: No shared files mode enabled, IPC is disabled 00:04:53.576 EAL: No shared files mode enabled, IPC is disabled 00:04:53.576 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:53.576 EAL: Mem event callback 'spdk:(nil)' registered 00:04:53.576 00:04:53.576 00:04:53.576 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.576 http://cunit.sourceforge.net/ 00:04:53.576 00:04:53.576 00:04:53.576 Suite: components_suite 00:04:53.576 Test: vtophys_malloc_test ...passed 00:04:53.576 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:53.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.576 EAL: Restoring previous memory policy: 4 00:04:53.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.576 EAL: request: mp_malloc_sync 00:04:53.576 EAL: No shared files mode enabled, IPC is disabled 00:04:53.576 EAL: Heap on socket 0 was expanded by 4MB 00:04:53.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.576 EAL: request: mp_malloc_sync 00:04:53.576 EAL: No shared files mode enabled, IPC is disabled 00:04:53.576 EAL: Heap on socket 0 was shrunk by 4MB 00:04:53.576 EAL: Trying to obtain current memory policy. 00:04:53.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.576 EAL: Restoring previous memory policy: 4 00:04:53.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.576 EAL: request: mp_malloc_sync 00:04:53.576 EAL: No shared files mode enabled, IPC is disabled 00:04:53.576 EAL: Heap on socket 0 was expanded by 6MB 00:04:53.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.576 EAL: request: mp_malloc_sync 00:04:53.576 EAL: No shared files mode enabled, IPC is disabled 00:04:53.576 EAL: Heap on socket 0 was shrunk by 6MB 00:04:53.576 EAL: Trying to obtain current memory policy. 
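The memseg reservations above are backed by the 2048kB hugepages that setup.sh status listed per node earlier in the log. A short sketch for reading those counters straight from sysfs, with the node names and page sizes taken from this host's output:
for node in /sys/devices/system/node/node[01]; do
    for sz in 2048kB 1048576kB; do
        total=$(cat "$node/hugepages/hugepages-$sz/nr_hugepages")
        free=$(cat "$node/hugepages/hugepages-$sz/free_hugepages")
        echo "$(basename "$node") $sz free/total: $free/$total"
    done
done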
00:04:53.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.576 EAL: Restoring previous memory policy: 4 00:04:53.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.576 EAL: request: mp_malloc_sync 00:04:53.576 EAL: No shared files mode enabled, IPC is disabled 00:04:53.576 EAL: Heap on socket 0 was expanded by 10MB 00:04:53.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.576 EAL: request: mp_malloc_sync 00:04:53.576 EAL: No shared files mode enabled, IPC is disabled 00:04:53.576 EAL: Heap on socket 0 was shrunk by 10MB 00:04:53.576 EAL: Trying to obtain current memory policy. 00:04:53.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.576 EAL: Restoring previous memory policy: 4 00:04:53.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.576 EAL: request: mp_malloc_sync 00:04:53.576 EAL: No shared files mode enabled, IPC is disabled 00:04:53.576 EAL: Heap on socket 0 was expanded by 18MB 00:04:53.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.576 EAL: request: mp_malloc_sync 00:04:53.576 EAL: No shared files mode enabled, IPC is disabled 00:04:53.576 EAL: Heap on socket 0 was shrunk by 18MB 00:04:53.576 EAL: Trying to obtain current memory policy. 00:04:53.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.577 EAL: Restoring previous memory policy: 4 00:04:53.577 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.577 EAL: request: mp_malloc_sync 00:04:53.577 EAL: No shared files mode enabled, IPC is disabled 00:04:53.577 EAL: Heap on socket 0 was expanded by 34MB 00:04:53.577 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.577 EAL: request: mp_malloc_sync 00:04:53.577 EAL: No shared files mode enabled, IPC is disabled 00:04:53.577 EAL: Heap on socket 0 was shrunk by 34MB 00:04:53.577 EAL: Trying to obtain current memory policy. 00:04:53.577 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.577 EAL: Restoring previous memory policy: 4 00:04:53.577 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.577 EAL: request: mp_malloc_sync 00:04:53.577 EAL: No shared files mode enabled, IPC is disabled 00:04:53.577 EAL: Heap on socket 0 was expanded by 66MB 00:04:53.577 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.836 EAL: request: mp_malloc_sync 00:04:53.836 EAL: No shared files mode enabled, IPC is disabled 00:04:53.836 EAL: Heap on socket 0 was shrunk by 66MB 00:04:53.836 EAL: Trying to obtain current memory policy. 00:04:53.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.836 EAL: Restoring previous memory policy: 4 00:04:53.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.836 EAL: request: mp_malloc_sync 00:04:53.836 EAL: No shared files mode enabled, IPC is disabled 00:04:53.836 EAL: Heap on socket 0 was expanded by 130MB 00:04:53.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.836 EAL: request: mp_malloc_sync 00:04:53.836 EAL: No shared files mode enabled, IPC is disabled 00:04:53.836 EAL: Heap on socket 0 was shrunk by 130MB 00:04:53.836 EAL: Trying to obtain current memory policy. 
00:04:53.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.836 EAL: Restoring previous memory policy: 4 00:04:53.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.836 EAL: request: mp_malloc_sync 00:04:53.836 EAL: No shared files mode enabled, IPC is disabled 00:04:53.836 EAL: Heap on socket 0 was expanded by 258MB 00:04:53.836 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.836 EAL: request: mp_malloc_sync 00:04:53.836 EAL: No shared files mode enabled, IPC is disabled 00:04:53.836 EAL: Heap on socket 0 was shrunk by 258MB 00:04:53.836 EAL: Trying to obtain current memory policy. 00:04:53.836 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.095 EAL: Restoring previous memory policy: 4 00:04:54.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.095 EAL: request: mp_malloc_sync 00:04:54.095 EAL: No shared files mode enabled, IPC is disabled 00:04:54.095 EAL: Heap on socket 0 was expanded by 514MB 00:04:54.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.095 EAL: request: mp_malloc_sync 00:04:54.095 EAL: No shared files mode enabled, IPC is disabled 00:04:54.095 EAL: Heap on socket 0 was shrunk by 514MB 00:04:54.095 EAL: Trying to obtain current memory policy. 00:04:54.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.353 EAL: Restoring previous memory policy: 4 00:04:54.353 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.353 EAL: request: mp_malloc_sync 00:04:54.353 EAL: No shared files mode enabled, IPC is disabled 00:04:54.353 EAL: Heap on socket 0 was expanded by 1026MB 00:04:54.612 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.612 EAL: request: mp_malloc_sync 00:04:54.612 EAL: No shared files mode enabled, IPC is disabled 00:04:54.612 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:54.612 passed 00:04:54.612 00:04:54.612 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.612 suites 1 1 n/a 0 0 00:04:54.612 tests 2 2 2 0 0 00:04:54.612 asserts 497 497 497 0 n/a 00:04:54.612 00:04:54.612 Elapsed time = 0.958 seconds 00:04:54.612 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.612 EAL: request: mp_malloc_sync 00:04:54.612 EAL: No shared files mode enabled, IPC is disabled 00:04:54.612 EAL: Heap on socket 0 was shrunk by 2MB 00:04:54.612 EAL: No shared files mode enabled, IPC is disabled 00:04:54.613 EAL: No shared files mode enabled, IPC is disabled 00:04:54.613 EAL: No shared files mode enabled, IPC is disabled 00:04:54.613 00:04:54.613 real 0m1.068s 00:04:54.613 user 0m0.625s 00:04:54.613 sys 0m0.416s 00:04:54.613 05:33:28 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.613 05:33:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:54.613 ************************************ 00:04:54.613 END TEST env_vtophys 00:04:54.613 ************************************ 00:04:54.613 05:33:28 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:54.613 05:33:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.613 05:33:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.613 05:33:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.613 ************************************ 00:04:54.613 START TEST env_pci 00:04:54.613 ************************************ 00:04:54.613 05:33:28 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:54.613 00:04:54.613 00:04:54.613 CUnit - A unit testing 
framework for C - Version 2.1-3 00:04:54.613 http://cunit.sourceforge.net/ 00:04:54.613 00:04:54.613 00:04:54.613 Suite: pci 00:04:54.613 Test: pci_hook ...[2024-12-16 05:33:28.459826] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3147650 has claimed it 00:04:54.872 EAL: Cannot find device (10000:00:01.0) 00:04:54.872 EAL: Failed to attach device on primary process 00:04:54.872 passed 00:04:54.872 00:04:54.872 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.872 suites 1 1 n/a 0 0 00:04:54.872 tests 1 1 1 0 0 00:04:54.872 asserts 25 25 25 0 n/a 00:04:54.872 00:04:54.872 Elapsed time = 0.026 seconds 00:04:54.872 00:04:54.872 real 0m0.044s 00:04:54.872 user 0m0.014s 00:04:54.872 sys 0m0.029s 00:04:54.872 05:33:28 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.872 05:33:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:54.872 ************************************ 00:04:54.872 END TEST env_pci 00:04:54.872 ************************************ 00:04:54.872 05:33:28 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:54.872 05:33:28 env -- env/env.sh@15 -- # uname 00:04:54.872 05:33:28 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:54.872 05:33:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:54.872 05:33:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:54.872 05:33:28 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:54.872 05:33:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.872 05:33:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.872 ************************************ 00:04:54.872 START TEST env_dpdk_post_init 00:04:54.872 ************************************ 00:04:54.872 05:33:28 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:54.872 EAL: Detected CPU lcores: 96 00:04:54.872 EAL: Detected NUMA nodes: 2 00:04:54.872 EAL: Detected shared linkage of DPDK 00:04:54.872 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:54.872 EAL: Selected IOVA mode 'VA' 00:04:54.872 EAL: VFIO support initialized 00:04:54.872 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:54.872 EAL: Using IOMMU type 1 (Type 1) 00:04:54.872 EAL: Ignore mapping IO port bar(1) 00:04:54.872 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:54.872 EAL: Ignore mapping IO port bar(1) 00:04:54.872 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:54.872 EAL: Ignore mapping IO port bar(1) 00:04:54.872 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:54.872 EAL: Ignore mapping IO port bar(1) 00:04:54.872 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:54.872 EAL: Ignore mapping IO port bar(1) 00:04:54.872 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:54.872 EAL: Ignore mapping IO port bar(1) 00:04:54.872 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:55.131 EAL: Ignore mapping IO port bar(1) 00:04:55.131 EAL: Probe PCI driver: spdk_ioat 
(8086:2021) device: 0000:00:04.6 (socket 0) 00:04:55.131 EAL: Ignore mapping IO port bar(1) 00:04:55.131 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:55.698 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:55.698 EAL: Ignore mapping IO port bar(1) 00:04:55.698 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:55.698 EAL: Ignore mapping IO port bar(1) 00:04:55.698 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:55.698 EAL: Ignore mapping IO port bar(1) 00:04:55.699 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:55.699 EAL: Ignore mapping IO port bar(1) 00:04:55.699 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:55.699 EAL: Ignore mapping IO port bar(1) 00:04:55.699 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:55.699 EAL: Ignore mapping IO port bar(1) 00:04:55.699 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:55.957 EAL: Ignore mapping IO port bar(1) 00:04:55.957 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:55.957 EAL: Ignore mapping IO port bar(1) 00:04:55.957 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:59.243 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:59.243 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:59.243 Starting DPDK initialization... 00:04:59.243 Starting SPDK post initialization... 00:04:59.243 SPDK NVMe probe 00:04:59.243 Attaching to 0000:5e:00.0 00:04:59.243 Attached to 0000:5e:00.0 00:04:59.243 Cleaning up... 00:04:59.243 00:04:59.243 real 0m4.288s 00:04:59.243 user 0m3.226s 00:04:59.243 sys 0m0.135s 00:04:59.243 05:33:32 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.243 05:33:32 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:59.243 ************************************ 00:04:59.243 END TEST env_dpdk_post_init 00:04:59.243 ************************************ 00:04:59.243 05:33:32 env -- env/env.sh@26 -- # uname 00:04:59.243 05:33:32 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:59.243 05:33:32 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:59.243 05:33:32 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.243 05:33:32 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.243 05:33:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.243 ************************************ 00:04:59.243 START TEST env_mem_callbacks 00:04:59.243 ************************************ 00:04:59.243 05:33:32 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:59.243 EAL: Detected CPU lcores: 96 00:04:59.243 EAL: Detected NUMA nodes: 2 00:04:59.243 EAL: Detected shared linkage of DPDK 00:04:59.243 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:59.243 EAL: Selected IOVA mode 'VA' 00:04:59.243 EAL: VFIO support initialized 00:04:59.243 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:59.243 00:04:59.243 00:04:59.243 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.243 http://cunit.sourceforge.net/ 00:04:59.243 00:04:59.243 00:04:59.243 Suite: memory 
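Each env sub-test traced in this stretch (memory_ut, vtophys, pci_ut, env_dpdk_post_init, and the mem_callbacks run starting here) is a standalone binary, so they can be rerun individually with the same arguments the log shows; a sketch, again using $rootdir for the workspace spdk directory:
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$rootdir/test/env/memory/memory_ut
$rootdir/test/env/vtophys/vtophys
$rootdir/test/env/pci/pci_ut
$rootdir/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
$rootdir/test/env/mem_callbacks/mem_callbacks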
00:04:59.243 Test: test ... 00:04:59.243 register 0x200000200000 2097152 00:04:59.243 malloc 3145728 00:04:59.243 register 0x200000400000 4194304 00:04:59.243 buf 0x200000500000 len 3145728 PASSED 00:04:59.243 malloc 64 00:04:59.243 buf 0x2000004fff40 len 64 PASSED 00:04:59.243 malloc 4194304 00:04:59.243 register 0x200000800000 6291456 00:04:59.243 buf 0x200000a00000 len 4194304 PASSED 00:04:59.243 free 0x200000500000 3145728 00:04:59.243 free 0x2000004fff40 64 00:04:59.243 unregister 0x200000400000 4194304 PASSED 00:04:59.243 free 0x200000a00000 4194304 00:04:59.243 unregister 0x200000800000 6291456 PASSED 00:04:59.243 malloc 8388608 00:04:59.243 register 0x200000400000 10485760 00:04:59.243 buf 0x200000600000 len 8388608 PASSED 00:04:59.243 free 0x200000600000 8388608 00:04:59.243 unregister 0x200000400000 10485760 PASSED 00:04:59.243 passed 00:04:59.243 00:04:59.243 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.243 suites 1 1 n/a 0 0 00:04:59.243 tests 1 1 1 0 0 00:04:59.243 asserts 15 15 15 0 n/a 00:04:59.243 00:04:59.243 Elapsed time = 0.005 seconds 00:04:59.243 00:04:59.243 real 0m0.042s 00:04:59.243 user 0m0.017s 00:04:59.243 sys 0m0.025s 00:04:59.243 05:33:32 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.243 05:33:32 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:59.243 ************************************ 00:04:59.243 END TEST env_mem_callbacks 00:04:59.243 ************************************ 00:04:59.243 00:04:59.243 real 0m6.084s 00:04:59.243 user 0m4.246s 00:04:59.243 sys 0m0.913s 00:04:59.243 05:33:32 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.243 05:33:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.243 ************************************ 00:04:59.243 END TEST env 00:04:59.243 ************************************ 00:04:59.243 05:33:33 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:59.243 05:33:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.243 05:33:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.243 05:33:33 -- common/autotest_common.sh@10 -- # set +x 00:04:59.243 ************************************ 00:04:59.243 START TEST rpc 00:04:59.243 ************************************ 00:04:59.243 05:33:33 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:59.502 * Looking for test storage... 
00:04:59.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:59.503 05:33:33 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:59.503 05:33:33 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:59.503 05:33:33 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:59.503 05:33:33 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:59.503 05:33:33 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.503 05:33:33 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.503 05:33:33 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.503 05:33:33 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.503 05:33:33 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.503 05:33:33 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.503 05:33:33 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.503 05:33:33 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.503 05:33:33 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.503 05:33:33 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.503 05:33:33 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.503 05:33:33 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:59.503 05:33:33 rpc -- scripts/common.sh@345 -- # : 1 00:04:59.503 05:33:33 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.503 05:33:33 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.503 05:33:33 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:59.503 05:33:33 rpc -- scripts/common.sh@353 -- # local d=1 00:04:59.503 05:33:33 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.503 05:33:33 rpc -- scripts/common.sh@355 -- # echo 1 00:04:59.503 05:33:33 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.503 05:33:33 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:59.503 05:33:33 rpc -- scripts/common.sh@353 -- # local d=2 00:04:59.503 05:33:33 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.503 05:33:33 rpc -- scripts/common.sh@355 -- # echo 2 00:04:59.503 05:33:33 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.503 05:33:33 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.503 05:33:33 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.503 05:33:33 rpc -- scripts/common.sh@368 -- # return 0 00:04:59.503 05:33:33 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.503 05:33:33 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:59.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.503 --rc genhtml_branch_coverage=1 00:04:59.503 --rc genhtml_function_coverage=1 00:04:59.503 --rc genhtml_legend=1 00:04:59.503 --rc geninfo_all_blocks=1 00:04:59.503 --rc geninfo_unexecuted_blocks=1 00:04:59.503 00:04:59.503 ' 00:04:59.503 05:33:33 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:59.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.503 --rc genhtml_branch_coverage=1 00:04:59.503 --rc genhtml_function_coverage=1 00:04:59.503 --rc genhtml_legend=1 00:04:59.503 --rc geninfo_all_blocks=1 00:04:59.503 --rc geninfo_unexecuted_blocks=1 00:04:59.503 00:04:59.503 ' 00:04:59.503 05:33:33 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:59.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.503 --rc genhtml_branch_coverage=1 00:04:59.503 --rc genhtml_function_coverage=1 
00:04:59.503 --rc genhtml_legend=1 00:04:59.503 --rc geninfo_all_blocks=1 00:04:59.503 --rc geninfo_unexecuted_blocks=1 00:04:59.503 00:04:59.503 ' 00:04:59.503 05:33:33 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:59.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.503 --rc genhtml_branch_coverage=1 00:04:59.503 --rc genhtml_function_coverage=1 00:04:59.503 --rc genhtml_legend=1 00:04:59.503 --rc geninfo_all_blocks=1 00:04:59.503 --rc geninfo_unexecuted_blocks=1 00:04:59.503 00:04:59.503 ' 00:04:59.503 05:33:33 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3148478 00:04:59.503 05:33:33 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:59.503 05:33:33 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.503 05:33:33 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3148478 00:04:59.503 05:33:33 rpc -- common/autotest_common.sh@831 -- # '[' -z 3148478 ']' 00:04:59.503 05:33:33 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.503 05:33:33 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.503 05:33:33 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.503 05:33:33 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.503 05:33:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.503 [2024-12-16 05:33:33.269534] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:04:59.503 [2024-12-16 05:33:33.269581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3148478 ] 00:04:59.503 [2024-12-16 05:33:33.325553] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.762 [2024-12-16 05:33:33.364377] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:59.762 [2024-12-16 05:33:33.364414] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3148478' to capture a snapshot of events at runtime. 00:04:59.762 [2024-12-16 05:33:33.364422] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:59.762 [2024-12-16 05:33:33.364429] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:59.762 [2024-12-16 05:33:33.364433] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3148478 for offline analysis/debug. 
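[editorial sketch, not part of the captured log] The app_setup_trace notices just above describe how trace data from this spdk_tgt -e bdev run can be collected; a minimal sketch of that workflow, assuming a local SPDK build (paths and the <pid> placeholder are illustrative):
    build/bin/spdk_tgt -e bdev &                          # target started with the bdev tracepoint group enabled
    build/bin/spdk_trace -s spdk_tgt -p <pid>             # live snapshot of recorded events, as the notice suggests
    cp /dev/shm/spdk_tgt_trace.pid<pid> /tmp/trace.shm    # or keep the shared-memory file for offline analysis
    build/bin/spdk_trace -f /tmp/trace.shm                # parse the saved copy later (-f assumed from typical spdk_trace usage)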
00:04:59.762 [2024-12-16 05:33:33.364453] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.762 05:33:33 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.762 05:33:33 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:59.762 05:33:33 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:59.762 05:33:33 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:59.762 05:33:33 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:59.762 05:33:33 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:59.762 05:33:33 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.762 05:33:33 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.762 05:33:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.762 ************************************ 00:04:59.762 START TEST rpc_integrity 00:04:59.762 ************************************ 00:04:59.762 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:59.762 05:33:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:59.762 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.762 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.762 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.762 05:33:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:59.762 05:33:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:00.021 05:33:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:00.021 05:33:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:00.021 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.021 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.021 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.021 05:33:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:00.021 05:33:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:00.021 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.021 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.021 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.021 05:33:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:00.021 { 00:05:00.021 "name": "Malloc0", 00:05:00.021 "aliases": [ 00:05:00.021 "598cfb8e-8062-4260-af1e-2ff3368c64c2" 00:05:00.021 ], 00:05:00.021 "product_name": "Malloc disk", 00:05:00.021 "block_size": 512, 00:05:00.021 "num_blocks": 16384, 00:05:00.021 "uuid": "598cfb8e-8062-4260-af1e-2ff3368c64c2", 00:05:00.021 "assigned_rate_limits": { 00:05:00.021 "rw_ios_per_sec": 0, 00:05:00.021 "rw_mbytes_per_sec": 0, 00:05:00.021 "r_mbytes_per_sec": 0, 00:05:00.021 "w_mbytes_per_sec": 0 00:05:00.021 }, 
00:05:00.021 "claimed": false, 00:05:00.021 "zoned": false, 00:05:00.021 "supported_io_types": { 00:05:00.021 "read": true, 00:05:00.021 "write": true, 00:05:00.021 "unmap": true, 00:05:00.021 "flush": true, 00:05:00.021 "reset": true, 00:05:00.021 "nvme_admin": false, 00:05:00.021 "nvme_io": false, 00:05:00.021 "nvme_io_md": false, 00:05:00.021 "write_zeroes": true, 00:05:00.021 "zcopy": true, 00:05:00.021 "get_zone_info": false, 00:05:00.021 "zone_management": false, 00:05:00.021 "zone_append": false, 00:05:00.021 "compare": false, 00:05:00.021 "compare_and_write": false, 00:05:00.021 "abort": true, 00:05:00.021 "seek_hole": false, 00:05:00.021 "seek_data": false, 00:05:00.021 "copy": true, 00:05:00.021 "nvme_iov_md": false 00:05:00.021 }, 00:05:00.021 "memory_domains": [ 00:05:00.021 { 00:05:00.021 "dma_device_id": "system", 00:05:00.021 "dma_device_type": 1 00:05:00.021 }, 00:05:00.021 { 00:05:00.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.021 "dma_device_type": 2 00:05:00.021 } 00:05:00.021 ], 00:05:00.021 "driver_specific": {} 00:05:00.021 } 00:05:00.021 ]' 00:05:00.021 05:33:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:00.021 05:33:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:00.021 05:33:33 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:00.021 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.021 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.021 [2024-12-16 05:33:33.712094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:00.021 [2024-12-16 05:33:33.712126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:00.021 [2024-12-16 05:33:33.712139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22c5cf0 00:05:00.021 [2024-12-16 05:33:33.712145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:00.021 [2024-12-16 05:33:33.713210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:00.021 [2024-12-16 05:33:33.713231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:00.021 Passthru0 00:05:00.021 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.021 05:33:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:00.021 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.021 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.021 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.021 05:33:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:00.021 { 00:05:00.021 "name": "Malloc0", 00:05:00.021 "aliases": [ 00:05:00.021 "598cfb8e-8062-4260-af1e-2ff3368c64c2" 00:05:00.021 ], 00:05:00.021 "product_name": "Malloc disk", 00:05:00.021 "block_size": 512, 00:05:00.021 "num_blocks": 16384, 00:05:00.021 "uuid": "598cfb8e-8062-4260-af1e-2ff3368c64c2", 00:05:00.021 "assigned_rate_limits": { 00:05:00.021 "rw_ios_per_sec": 0, 00:05:00.021 "rw_mbytes_per_sec": 0, 00:05:00.021 "r_mbytes_per_sec": 0, 00:05:00.021 "w_mbytes_per_sec": 0 00:05:00.021 }, 00:05:00.021 "claimed": true, 00:05:00.021 "claim_type": "exclusive_write", 00:05:00.021 "zoned": false, 00:05:00.021 "supported_io_types": { 00:05:00.021 "read": true, 00:05:00.021 "write": true, 00:05:00.021 "unmap": true, 00:05:00.021 "flush": 
true, 00:05:00.021 "reset": true, 00:05:00.021 "nvme_admin": false, 00:05:00.021 "nvme_io": false, 00:05:00.021 "nvme_io_md": false, 00:05:00.021 "write_zeroes": true, 00:05:00.021 "zcopy": true, 00:05:00.021 "get_zone_info": false, 00:05:00.021 "zone_management": false, 00:05:00.021 "zone_append": false, 00:05:00.021 "compare": false, 00:05:00.021 "compare_and_write": false, 00:05:00.021 "abort": true, 00:05:00.021 "seek_hole": false, 00:05:00.021 "seek_data": false, 00:05:00.021 "copy": true, 00:05:00.021 "nvme_iov_md": false 00:05:00.021 }, 00:05:00.021 "memory_domains": [ 00:05:00.021 { 00:05:00.021 "dma_device_id": "system", 00:05:00.021 "dma_device_type": 1 00:05:00.021 }, 00:05:00.021 { 00:05:00.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.021 "dma_device_type": 2 00:05:00.021 } 00:05:00.021 ], 00:05:00.021 "driver_specific": {} 00:05:00.021 }, 00:05:00.021 { 00:05:00.021 "name": "Passthru0", 00:05:00.021 "aliases": [ 00:05:00.021 "7092d9e4-740c-527c-8e73-f03c61073dbb" 00:05:00.021 ], 00:05:00.021 "product_name": "passthru", 00:05:00.021 "block_size": 512, 00:05:00.021 "num_blocks": 16384, 00:05:00.021 "uuid": "7092d9e4-740c-527c-8e73-f03c61073dbb", 00:05:00.021 "assigned_rate_limits": { 00:05:00.021 "rw_ios_per_sec": 0, 00:05:00.021 "rw_mbytes_per_sec": 0, 00:05:00.021 "r_mbytes_per_sec": 0, 00:05:00.021 "w_mbytes_per_sec": 0 00:05:00.021 }, 00:05:00.021 "claimed": false, 00:05:00.021 "zoned": false, 00:05:00.021 "supported_io_types": { 00:05:00.021 "read": true, 00:05:00.021 "write": true, 00:05:00.021 "unmap": true, 00:05:00.021 "flush": true, 00:05:00.021 "reset": true, 00:05:00.021 "nvme_admin": false, 00:05:00.021 "nvme_io": false, 00:05:00.021 "nvme_io_md": false, 00:05:00.021 "write_zeroes": true, 00:05:00.021 "zcopy": true, 00:05:00.021 "get_zone_info": false, 00:05:00.021 "zone_management": false, 00:05:00.021 "zone_append": false, 00:05:00.021 "compare": false, 00:05:00.021 "compare_and_write": false, 00:05:00.021 "abort": true, 00:05:00.021 "seek_hole": false, 00:05:00.021 "seek_data": false, 00:05:00.021 "copy": true, 00:05:00.021 "nvme_iov_md": false 00:05:00.021 }, 00:05:00.021 "memory_domains": [ 00:05:00.021 { 00:05:00.021 "dma_device_id": "system", 00:05:00.021 "dma_device_type": 1 00:05:00.021 }, 00:05:00.021 { 00:05:00.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.021 "dma_device_type": 2 00:05:00.021 } 00:05:00.021 ], 00:05:00.021 "driver_specific": { 00:05:00.021 "passthru": { 00:05:00.021 "name": "Passthru0", 00:05:00.021 "base_bdev_name": "Malloc0" 00:05:00.021 } 00:05:00.021 } 00:05:00.021 } 00:05:00.021 ]' 00:05:00.021 05:33:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:00.022 05:33:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:00.022 05:33:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:00.022 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.022 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.022 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.022 05:33:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:00.022 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.022 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.022 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.022 05:33:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:00.022 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.022 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.022 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.022 05:33:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:00.022 05:33:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:00.022 05:33:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:00.022 00:05:00.022 real 0m0.274s 00:05:00.022 user 0m0.172s 00:05:00.022 sys 0m0.039s 00:05:00.022 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.022 05:33:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.022 ************************************ 00:05:00.022 END TEST rpc_integrity 00:05:00.022 ************************************ 00:05:00.280 05:33:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:00.280 05:33:33 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.280 05:33:33 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.280 05:33:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.280 ************************************ 00:05:00.280 START TEST rpc_plugins 00:05:00.280 ************************************ 00:05:00.280 05:33:33 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:00.280 05:33:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:00.280 05:33:33 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.280 05:33:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.280 05:33:33 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.280 05:33:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:00.280 05:33:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:00.280 05:33:33 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.280 05:33:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.280 05:33:33 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.280 05:33:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:00.280 { 00:05:00.280 "name": "Malloc1", 00:05:00.280 "aliases": [ 00:05:00.280 "4170e453-61e3-4158-8243-12d847abd629" 00:05:00.280 ], 00:05:00.280 "product_name": "Malloc disk", 00:05:00.280 "block_size": 4096, 00:05:00.280 "num_blocks": 256, 00:05:00.280 "uuid": "4170e453-61e3-4158-8243-12d847abd629", 00:05:00.280 "assigned_rate_limits": { 00:05:00.280 "rw_ios_per_sec": 0, 00:05:00.280 "rw_mbytes_per_sec": 0, 00:05:00.280 "r_mbytes_per_sec": 0, 00:05:00.280 "w_mbytes_per_sec": 0 00:05:00.280 }, 00:05:00.280 "claimed": false, 00:05:00.280 "zoned": false, 00:05:00.280 "supported_io_types": { 00:05:00.280 "read": true, 00:05:00.280 "write": true, 00:05:00.280 "unmap": true, 00:05:00.280 "flush": true, 00:05:00.280 "reset": true, 00:05:00.280 "nvme_admin": false, 00:05:00.280 "nvme_io": false, 00:05:00.280 "nvme_io_md": false, 00:05:00.280 "write_zeroes": true, 00:05:00.280 "zcopy": true, 00:05:00.280 "get_zone_info": false, 00:05:00.280 "zone_management": false, 00:05:00.280 "zone_append": false, 00:05:00.280 "compare": false, 00:05:00.280 "compare_and_write": false, 00:05:00.280 "abort": true, 00:05:00.280 "seek_hole": false, 00:05:00.280 "seek_data": false, 00:05:00.280 "copy": true, 00:05:00.280 "nvme_iov_md": false 
00:05:00.280 }, 00:05:00.280 "memory_domains": [ 00:05:00.280 { 00:05:00.280 "dma_device_id": "system", 00:05:00.280 "dma_device_type": 1 00:05:00.280 }, 00:05:00.280 { 00:05:00.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.280 "dma_device_type": 2 00:05:00.280 } 00:05:00.280 ], 00:05:00.280 "driver_specific": {} 00:05:00.280 } 00:05:00.280 ]' 00:05:00.280 05:33:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:00.280 05:33:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:00.280 05:33:33 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:00.280 05:33:33 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.280 05:33:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.280 05:33:34 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.280 05:33:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:00.280 05:33:34 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.280 05:33:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.281 05:33:34 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.281 05:33:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:00.281 05:33:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:00.281 05:33:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:00.281 00:05:00.281 real 0m0.142s 00:05:00.281 user 0m0.084s 00:05:00.281 sys 0m0.023s 00:05:00.281 05:33:34 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.281 05:33:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.281 ************************************ 00:05:00.281 END TEST rpc_plugins 00:05:00.281 ************************************ 00:05:00.281 05:33:34 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:00.281 05:33:34 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.281 05:33:34 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.281 05:33:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.281 ************************************ 00:05:00.281 START TEST rpc_trace_cmd_test 00:05:00.281 ************************************ 00:05:00.281 05:33:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:00.281 05:33:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:00.281 05:33:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:00.281 05:33:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.281 05:33:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:00.539 05:33:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.539 05:33:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:00.539 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3148478", 00:05:00.539 "tpoint_group_mask": "0x8", 00:05:00.539 "iscsi_conn": { 00:05:00.539 "mask": "0x2", 00:05:00.539 "tpoint_mask": "0x0" 00:05:00.539 }, 00:05:00.539 "scsi": { 00:05:00.539 "mask": "0x4", 00:05:00.539 "tpoint_mask": "0x0" 00:05:00.539 }, 00:05:00.539 "bdev": { 00:05:00.539 "mask": "0x8", 00:05:00.539 "tpoint_mask": "0xffffffffffffffff" 00:05:00.539 }, 00:05:00.539 "nvmf_rdma": { 00:05:00.539 "mask": "0x10", 00:05:00.539 "tpoint_mask": "0x0" 00:05:00.539 }, 00:05:00.539 "nvmf_tcp": { 00:05:00.539 "mask": "0x20", 00:05:00.539 
"tpoint_mask": "0x0" 00:05:00.539 }, 00:05:00.539 "ftl": { 00:05:00.539 "mask": "0x40", 00:05:00.539 "tpoint_mask": "0x0" 00:05:00.539 }, 00:05:00.539 "blobfs": { 00:05:00.539 "mask": "0x80", 00:05:00.539 "tpoint_mask": "0x0" 00:05:00.539 }, 00:05:00.539 "dsa": { 00:05:00.539 "mask": "0x200", 00:05:00.539 "tpoint_mask": "0x0" 00:05:00.539 }, 00:05:00.539 "thread": { 00:05:00.539 "mask": "0x400", 00:05:00.539 "tpoint_mask": "0x0" 00:05:00.539 }, 00:05:00.539 "nvme_pcie": { 00:05:00.539 "mask": "0x800", 00:05:00.539 "tpoint_mask": "0x0" 00:05:00.539 }, 00:05:00.539 "iaa": { 00:05:00.539 "mask": "0x1000", 00:05:00.539 "tpoint_mask": "0x0" 00:05:00.539 }, 00:05:00.539 "nvme_tcp": { 00:05:00.539 "mask": "0x2000", 00:05:00.539 "tpoint_mask": "0x0" 00:05:00.539 }, 00:05:00.539 "bdev_nvme": { 00:05:00.539 "mask": "0x4000", 00:05:00.539 "tpoint_mask": "0x0" 00:05:00.539 }, 00:05:00.539 "sock": { 00:05:00.539 "mask": "0x8000", 00:05:00.539 "tpoint_mask": "0x0" 00:05:00.539 }, 00:05:00.539 "blob": { 00:05:00.539 "mask": "0x10000", 00:05:00.539 "tpoint_mask": "0x0" 00:05:00.539 }, 00:05:00.539 "bdev_raid": { 00:05:00.539 "mask": "0x20000", 00:05:00.539 "tpoint_mask": "0x0" 00:05:00.539 } 00:05:00.539 }' 00:05:00.539 05:33:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:00.539 05:33:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:00.539 05:33:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:00.539 05:33:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:00.539 05:33:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:00.539 05:33:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:00.539 05:33:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:00.539 05:33:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:00.539 05:33:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:00.539 05:33:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:00.539 00:05:00.539 real 0m0.195s 00:05:00.539 user 0m0.165s 00:05:00.539 sys 0m0.019s 00:05:00.539 05:33:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.539 05:33:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:00.539 ************************************ 00:05:00.539 END TEST rpc_trace_cmd_test 00:05:00.539 ************************************ 00:05:00.539 05:33:34 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:00.539 05:33:34 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:00.539 05:33:34 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:00.539 05:33:34 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.539 05:33:34 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.539 05:33:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.539 ************************************ 00:05:00.539 START TEST rpc_daemon_integrity 00:05:00.539 ************************************ 00:05:00.539 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:00.539 05:33:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:00.539 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.539 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.539 05:33:34 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:00.798 { 00:05:00.798 "name": "Malloc2", 00:05:00.798 "aliases": [ 00:05:00.798 "fdbdff1f-cd50-44be-a89e-e5864dfac7c6" 00:05:00.798 ], 00:05:00.798 "product_name": "Malloc disk", 00:05:00.798 "block_size": 512, 00:05:00.798 "num_blocks": 16384, 00:05:00.798 "uuid": "fdbdff1f-cd50-44be-a89e-e5864dfac7c6", 00:05:00.798 "assigned_rate_limits": { 00:05:00.798 "rw_ios_per_sec": 0, 00:05:00.798 "rw_mbytes_per_sec": 0, 00:05:00.798 "r_mbytes_per_sec": 0, 00:05:00.798 "w_mbytes_per_sec": 0 00:05:00.798 }, 00:05:00.798 "claimed": false, 00:05:00.798 "zoned": false, 00:05:00.798 "supported_io_types": { 00:05:00.798 "read": true, 00:05:00.798 "write": true, 00:05:00.798 "unmap": true, 00:05:00.798 "flush": true, 00:05:00.798 "reset": true, 00:05:00.798 "nvme_admin": false, 00:05:00.798 "nvme_io": false, 00:05:00.798 "nvme_io_md": false, 00:05:00.798 "write_zeroes": true, 00:05:00.798 "zcopy": true, 00:05:00.798 "get_zone_info": false, 00:05:00.798 "zone_management": false, 00:05:00.798 "zone_append": false, 00:05:00.798 "compare": false, 00:05:00.798 "compare_and_write": false, 00:05:00.798 "abort": true, 00:05:00.798 "seek_hole": false, 00:05:00.798 "seek_data": false, 00:05:00.798 "copy": true, 00:05:00.798 "nvme_iov_md": false 00:05:00.798 }, 00:05:00.798 "memory_domains": [ 00:05:00.798 { 00:05:00.798 "dma_device_id": "system", 00:05:00.798 "dma_device_type": 1 00:05:00.798 }, 00:05:00.798 { 00:05:00.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.798 "dma_device_type": 2 00:05:00.798 } 00:05:00.798 ], 00:05:00.798 "driver_specific": {} 00:05:00.798 } 00:05:00.798 ]' 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.798 [2024-12-16 05:33:34.518283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:00.798 [2024-12-16 05:33:34.518309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:00.798 
[2024-12-16 05:33:34.518321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2356610 00:05:00.798 [2024-12-16 05:33:34.518328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:00.798 [2024-12-16 05:33:34.519261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:00.798 [2024-12-16 05:33:34.519280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:00.798 Passthru0 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.798 05:33:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:00.798 { 00:05:00.798 "name": "Malloc2", 00:05:00.798 "aliases": [ 00:05:00.798 "fdbdff1f-cd50-44be-a89e-e5864dfac7c6" 00:05:00.798 ], 00:05:00.798 "product_name": "Malloc disk", 00:05:00.798 "block_size": 512, 00:05:00.798 "num_blocks": 16384, 00:05:00.798 "uuid": "fdbdff1f-cd50-44be-a89e-e5864dfac7c6", 00:05:00.798 "assigned_rate_limits": { 00:05:00.798 "rw_ios_per_sec": 0, 00:05:00.798 "rw_mbytes_per_sec": 0, 00:05:00.798 "r_mbytes_per_sec": 0, 00:05:00.798 "w_mbytes_per_sec": 0 00:05:00.798 }, 00:05:00.798 "claimed": true, 00:05:00.798 "claim_type": "exclusive_write", 00:05:00.798 "zoned": false, 00:05:00.798 "supported_io_types": { 00:05:00.798 "read": true, 00:05:00.798 "write": true, 00:05:00.798 "unmap": true, 00:05:00.799 "flush": true, 00:05:00.799 "reset": true, 00:05:00.799 "nvme_admin": false, 00:05:00.799 "nvme_io": false, 00:05:00.799 "nvme_io_md": false, 00:05:00.799 "write_zeroes": true, 00:05:00.799 "zcopy": true, 00:05:00.799 "get_zone_info": false, 00:05:00.799 "zone_management": false, 00:05:00.799 "zone_append": false, 00:05:00.799 "compare": false, 00:05:00.799 "compare_and_write": false, 00:05:00.799 "abort": true, 00:05:00.799 "seek_hole": false, 00:05:00.799 "seek_data": false, 00:05:00.799 "copy": true, 00:05:00.799 "nvme_iov_md": false 00:05:00.799 }, 00:05:00.799 "memory_domains": [ 00:05:00.799 { 00:05:00.799 "dma_device_id": "system", 00:05:00.799 "dma_device_type": 1 00:05:00.799 }, 00:05:00.799 { 00:05:00.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.799 "dma_device_type": 2 00:05:00.799 } 00:05:00.799 ], 00:05:00.799 "driver_specific": {} 00:05:00.799 }, 00:05:00.799 { 00:05:00.799 "name": "Passthru0", 00:05:00.799 "aliases": [ 00:05:00.799 "c17b7bf2-98c6-5ba0-a419-c63cca3d47d7" 00:05:00.799 ], 00:05:00.799 "product_name": "passthru", 00:05:00.799 "block_size": 512, 00:05:00.799 "num_blocks": 16384, 00:05:00.799 "uuid": "c17b7bf2-98c6-5ba0-a419-c63cca3d47d7", 00:05:00.799 "assigned_rate_limits": { 00:05:00.799 "rw_ios_per_sec": 0, 00:05:00.799 "rw_mbytes_per_sec": 0, 00:05:00.799 "r_mbytes_per_sec": 0, 00:05:00.799 "w_mbytes_per_sec": 0 00:05:00.799 }, 00:05:00.799 "claimed": false, 00:05:00.799 "zoned": false, 00:05:00.799 "supported_io_types": { 00:05:00.799 "read": true, 00:05:00.799 "write": true, 00:05:00.799 "unmap": true, 00:05:00.799 "flush": true, 00:05:00.799 "reset": true, 00:05:00.799 "nvme_admin": false, 00:05:00.799 "nvme_io": false, 00:05:00.799 "nvme_io_md": false, 00:05:00.799 
"write_zeroes": true, 00:05:00.799 "zcopy": true, 00:05:00.799 "get_zone_info": false, 00:05:00.799 "zone_management": false, 00:05:00.799 "zone_append": false, 00:05:00.799 "compare": false, 00:05:00.799 "compare_and_write": false, 00:05:00.799 "abort": true, 00:05:00.799 "seek_hole": false, 00:05:00.799 "seek_data": false, 00:05:00.799 "copy": true, 00:05:00.799 "nvme_iov_md": false 00:05:00.799 }, 00:05:00.799 "memory_domains": [ 00:05:00.799 { 00:05:00.799 "dma_device_id": "system", 00:05:00.799 "dma_device_type": 1 00:05:00.799 }, 00:05:00.799 { 00:05:00.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.799 "dma_device_type": 2 00:05:00.799 } 00:05:00.799 ], 00:05:00.799 "driver_specific": { 00:05:00.799 "passthru": { 00:05:00.799 "name": "Passthru0", 00:05:00.799 "base_bdev_name": "Malloc2" 00:05:00.799 } 00:05:00.799 } 00:05:00.799 } 00:05:00.799 ]' 00:05:00.799 05:33:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:00.799 05:33:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:00.799 05:33:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:00.799 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.799 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.799 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.799 05:33:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:00.799 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.799 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.799 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.799 05:33:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:00.799 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.799 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.799 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.799 05:33:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:00.799 05:33:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:01.057 05:33:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:01.057 00:05:01.058 real 0m0.280s 00:05:01.058 user 0m0.173s 00:05:01.058 sys 0m0.040s 00:05:01.058 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.058 05:33:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.058 ************************************ 00:05:01.058 END TEST rpc_daemon_integrity 00:05:01.058 ************************************ 00:05:01.058 05:33:34 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:01.058 05:33:34 rpc -- rpc/rpc.sh@84 -- # killprocess 3148478 00:05:01.058 05:33:34 rpc -- common/autotest_common.sh@950 -- # '[' -z 3148478 ']' 00:05:01.058 05:33:34 rpc -- common/autotest_common.sh@954 -- # kill -0 3148478 00:05:01.058 05:33:34 rpc -- common/autotest_common.sh@955 -- # uname 00:05:01.058 05:33:34 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:01.058 05:33:34 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3148478 00:05:01.058 05:33:34 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:01.058 05:33:34 rpc -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:01.058 05:33:34 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3148478' 00:05:01.058 killing process with pid 3148478 00:05:01.058 05:33:34 rpc -- common/autotest_common.sh@969 -- # kill 3148478 00:05:01.058 05:33:34 rpc -- common/autotest_common.sh@974 -- # wait 3148478 00:05:01.316 00:05:01.316 real 0m2.017s 00:05:01.316 user 0m2.580s 00:05:01.316 sys 0m0.654s 00:05:01.316 05:33:35 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.316 05:33:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.316 ************************************ 00:05:01.316 END TEST rpc 00:05:01.316 ************************************ 00:05:01.316 05:33:35 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:01.316 05:33:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.316 05:33:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.316 05:33:35 -- common/autotest_common.sh@10 -- # set +x 00:05:01.316 ************************************ 00:05:01.316 START TEST skip_rpc 00:05:01.316 ************************************ 00:05:01.316 05:33:35 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:01.573 * Looking for test storage... 00:05:01.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:01.573 05:33:35 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:01.573 05:33:35 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:01.573 05:33:35 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:01.573 05:33:35 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:01.573 05:33:35 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:01.574 05:33:35 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.574 05:33:35 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:01.574 05:33:35 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.574 05:33:35 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.574 05:33:35 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.574 05:33:35 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:01.574 05:33:35 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.574 05:33:35 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:01.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.574 --rc genhtml_branch_coverage=1 00:05:01.574 --rc genhtml_function_coverage=1 00:05:01.574 --rc genhtml_legend=1 00:05:01.574 --rc geninfo_all_blocks=1 00:05:01.574 --rc geninfo_unexecuted_blocks=1 00:05:01.574 00:05:01.574 ' 00:05:01.574 05:33:35 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:01.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.574 --rc genhtml_branch_coverage=1 00:05:01.574 --rc genhtml_function_coverage=1 00:05:01.574 --rc genhtml_legend=1 00:05:01.574 --rc geninfo_all_blocks=1 00:05:01.574 --rc geninfo_unexecuted_blocks=1 00:05:01.574 00:05:01.574 ' 00:05:01.574 05:33:35 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:01.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.574 --rc genhtml_branch_coverage=1 00:05:01.574 --rc genhtml_function_coverage=1 00:05:01.574 --rc genhtml_legend=1 00:05:01.574 --rc geninfo_all_blocks=1 00:05:01.574 --rc geninfo_unexecuted_blocks=1 00:05:01.574 00:05:01.574 ' 00:05:01.574 05:33:35 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:01.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.574 --rc genhtml_branch_coverage=1 00:05:01.574 --rc genhtml_function_coverage=1 00:05:01.574 --rc genhtml_legend=1 00:05:01.574 --rc geninfo_all_blocks=1 00:05:01.574 --rc geninfo_unexecuted_blocks=1 00:05:01.574 00:05:01.574 ' 00:05:01.574 05:33:35 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:01.574 05:33:35 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:01.574 05:33:35 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:01.574 05:33:35 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.574 05:33:35 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.574 05:33:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.574 ************************************ 00:05:01.574 START TEST skip_rpc 00:05:01.574 ************************************ 00:05:01.574 05:33:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:01.574 
05:33:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3149099 00:05:01.574 05:33:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.574 05:33:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:01.574 05:33:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:01.574 [2024-12-16 05:33:35.382578] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:01.574 [2024-12-16 05:33:35.382614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3149099 ] 00:05:01.832 [2024-12-16 05:33:35.437261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.832 [2024-12-16 05:33:35.475605] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3149099 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 3149099 ']' 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 3149099 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3149099 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3149099' 00:05:07.099 killing process with pid 3149099 00:05:07.099 
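[editorial sketch, not part of the captured log] A minimal sketch of the check this skip_rpc stage just performed, assuming a local SPDK checkout: with --no-rpc-server the target never opens the default /var/tmp/spdk.sock RPC socket, so a plain RPC such as spdk_get_version is expected to fail (the echo guards are illustrative only):
    build/bin/spdk_tgt --no-rpc-server -m 0x1 &           # start the target without its JSON-RPC server
    tgt_pid=$!
    sleep 5                                               # the test sleeps before probing, as seen above
    scripts/rpc.py spdk_get_version \
        && echo "unexpected: RPC answered" \
        || echo "expected: no RPC server listening"
    kill "$tgt_pid"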
05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 3149099 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 3149099 00:05:07.099 00:05:07.099 real 0m5.382s 00:05:07.099 user 0m5.150s 00:05:07.099 sys 0m0.273s 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.099 05:33:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.099 ************************************ 00:05:07.099 END TEST skip_rpc 00:05:07.099 ************************************ 00:05:07.099 05:33:40 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:07.099 05:33:40 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.099 05:33:40 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.099 05:33:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.099 ************************************ 00:05:07.099 START TEST skip_rpc_with_json 00:05:07.099 ************************************ 00:05:07.099 05:33:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:07.099 05:33:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:07.099 05:33:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3150021 00:05:07.099 05:33:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.099 05:33:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.099 05:33:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3150021 00:05:07.099 05:33:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 3150021 ']' 00:05:07.099 05:33:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.099 05:33:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:07.099 05:33:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.099 05:33:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:07.099 05:33:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.099 [2024-12-16 05:33:40.833725] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:07.099 [2024-12-16 05:33:40.833766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3150021 ] 00:05:07.099 [2024-12-16 05:33:40.887814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.099 [2024-12-16 05:33:40.927620] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.358 05:33:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:07.358 05:33:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:07.358 05:33:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:07.358 05:33:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.358 05:33:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.358 [2024-12-16 05:33:41.125147] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:07.358 request: 00:05:07.358 { 00:05:07.358 "trtype": "tcp", 00:05:07.358 "method": "nvmf_get_transports", 00:05:07.358 "req_id": 1 00:05:07.358 } 00:05:07.358 Got JSON-RPC error response 00:05:07.358 response: 00:05:07.358 { 00:05:07.358 "code": -19, 00:05:07.358 "message": "No such device" 00:05:07.358 } 00:05:07.358 05:33:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:07.358 05:33:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:07.358 05:33:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.358 05:33:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.358 [2024-12-16 05:33:41.137250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:07.358 05:33:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.358 05:33:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:07.358 05:33:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.358 05:33:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.617 05:33:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.617 05:33:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:07.617 { 00:05:07.617 "subsystems": [ 00:05:07.617 { 00:05:07.617 "subsystem": "fsdev", 00:05:07.617 "config": [ 00:05:07.617 { 00:05:07.617 "method": "fsdev_set_opts", 00:05:07.617 "params": { 00:05:07.617 "fsdev_io_pool_size": 65535, 00:05:07.617 "fsdev_io_cache_size": 256 00:05:07.617 } 00:05:07.617 } 00:05:07.617 ] 00:05:07.617 }, 00:05:07.617 { 00:05:07.617 "subsystem": "vfio_user_target", 00:05:07.617 "config": null 00:05:07.617 }, 00:05:07.617 { 00:05:07.617 "subsystem": "keyring", 00:05:07.617 "config": [] 00:05:07.617 }, 00:05:07.617 { 00:05:07.617 "subsystem": "iobuf", 00:05:07.617 "config": [ 00:05:07.617 { 00:05:07.617 "method": "iobuf_set_options", 00:05:07.617 "params": { 00:05:07.617 "small_pool_count": 8192, 00:05:07.617 "large_pool_count": 1024, 00:05:07.617 "small_bufsize": 8192, 00:05:07.617 "large_bufsize": 135168 00:05:07.617 } 00:05:07.617 } 00:05:07.617 ] 00:05:07.617 }, 00:05:07.617 { 
00:05:07.617 "subsystem": "sock", 00:05:07.617 "config": [ 00:05:07.617 { 00:05:07.617 "method": "sock_set_default_impl", 00:05:07.617 "params": { 00:05:07.617 "impl_name": "posix" 00:05:07.617 } 00:05:07.617 }, 00:05:07.617 { 00:05:07.617 "method": "sock_impl_set_options", 00:05:07.617 "params": { 00:05:07.617 "impl_name": "ssl", 00:05:07.617 "recv_buf_size": 4096, 00:05:07.617 "send_buf_size": 4096, 00:05:07.617 "enable_recv_pipe": true, 00:05:07.617 "enable_quickack": false, 00:05:07.617 "enable_placement_id": 0, 00:05:07.617 "enable_zerocopy_send_server": true, 00:05:07.617 "enable_zerocopy_send_client": false, 00:05:07.617 "zerocopy_threshold": 0, 00:05:07.617 "tls_version": 0, 00:05:07.617 "enable_ktls": false 00:05:07.617 } 00:05:07.617 }, 00:05:07.617 { 00:05:07.617 "method": "sock_impl_set_options", 00:05:07.617 "params": { 00:05:07.617 "impl_name": "posix", 00:05:07.617 "recv_buf_size": 2097152, 00:05:07.617 "send_buf_size": 2097152, 00:05:07.617 "enable_recv_pipe": true, 00:05:07.617 "enable_quickack": false, 00:05:07.617 "enable_placement_id": 0, 00:05:07.617 "enable_zerocopy_send_server": true, 00:05:07.617 "enable_zerocopy_send_client": false, 00:05:07.617 "zerocopy_threshold": 0, 00:05:07.617 "tls_version": 0, 00:05:07.617 "enable_ktls": false 00:05:07.617 } 00:05:07.617 } 00:05:07.617 ] 00:05:07.617 }, 00:05:07.617 { 00:05:07.617 "subsystem": "vmd", 00:05:07.617 "config": [] 00:05:07.617 }, 00:05:07.617 { 00:05:07.617 "subsystem": "accel", 00:05:07.617 "config": [ 00:05:07.617 { 00:05:07.617 "method": "accel_set_options", 00:05:07.617 "params": { 00:05:07.617 "small_cache_size": 128, 00:05:07.617 "large_cache_size": 16, 00:05:07.617 "task_count": 2048, 00:05:07.617 "sequence_count": 2048, 00:05:07.617 "buf_count": 2048 00:05:07.617 } 00:05:07.617 } 00:05:07.617 ] 00:05:07.617 }, 00:05:07.617 { 00:05:07.617 "subsystem": "bdev", 00:05:07.617 "config": [ 00:05:07.617 { 00:05:07.617 "method": "bdev_set_options", 00:05:07.617 "params": { 00:05:07.617 "bdev_io_pool_size": 65535, 00:05:07.617 "bdev_io_cache_size": 256, 00:05:07.617 "bdev_auto_examine": true, 00:05:07.617 "iobuf_small_cache_size": 128, 00:05:07.617 "iobuf_large_cache_size": 16 00:05:07.617 } 00:05:07.617 }, 00:05:07.617 { 00:05:07.617 "method": "bdev_raid_set_options", 00:05:07.617 "params": { 00:05:07.617 "process_window_size_kb": 1024, 00:05:07.617 "process_max_bandwidth_mb_sec": 0 00:05:07.617 } 00:05:07.617 }, 00:05:07.617 { 00:05:07.617 "method": "bdev_iscsi_set_options", 00:05:07.617 "params": { 00:05:07.617 "timeout_sec": 30 00:05:07.617 } 00:05:07.617 }, 00:05:07.617 { 00:05:07.617 "method": "bdev_nvme_set_options", 00:05:07.617 "params": { 00:05:07.617 "action_on_timeout": "none", 00:05:07.617 "timeout_us": 0, 00:05:07.617 "timeout_admin_us": 0, 00:05:07.617 "keep_alive_timeout_ms": 10000, 00:05:07.617 "arbitration_burst": 0, 00:05:07.617 "low_priority_weight": 0, 00:05:07.617 "medium_priority_weight": 0, 00:05:07.617 "high_priority_weight": 0, 00:05:07.617 "nvme_adminq_poll_period_us": 10000, 00:05:07.617 "nvme_ioq_poll_period_us": 0, 00:05:07.617 "io_queue_requests": 0, 00:05:07.617 "delay_cmd_submit": true, 00:05:07.617 "transport_retry_count": 4, 00:05:07.617 "bdev_retry_count": 3, 00:05:07.617 "transport_ack_timeout": 0, 00:05:07.617 "ctrlr_loss_timeout_sec": 0, 00:05:07.617 "reconnect_delay_sec": 0, 00:05:07.617 "fast_io_fail_timeout_sec": 0, 00:05:07.617 "disable_auto_failback": false, 00:05:07.617 "generate_uuids": false, 00:05:07.617 "transport_tos": 0, 00:05:07.617 "nvme_error_stat": false, 
00:05:07.617 "rdma_srq_size": 0, 00:05:07.617 "io_path_stat": false, 00:05:07.617 "allow_accel_sequence": false, 00:05:07.617 "rdma_max_cq_size": 0, 00:05:07.617 "rdma_cm_event_timeout_ms": 0, 00:05:07.617 "dhchap_digests": [ 00:05:07.617 "sha256", 00:05:07.617 "sha384", 00:05:07.617 "sha512" 00:05:07.617 ], 00:05:07.617 "dhchap_dhgroups": [ 00:05:07.617 "null", 00:05:07.617 "ffdhe2048", 00:05:07.617 "ffdhe3072", 00:05:07.617 "ffdhe4096", 00:05:07.617 "ffdhe6144", 00:05:07.617 "ffdhe8192" 00:05:07.617 ] 00:05:07.617 } 00:05:07.617 }, 00:05:07.617 { 00:05:07.617 "method": "bdev_nvme_set_hotplug", 00:05:07.617 "params": { 00:05:07.617 "period_us": 100000, 00:05:07.617 "enable": false 00:05:07.617 } 00:05:07.617 }, 00:05:07.617 { 00:05:07.617 "method": "bdev_wait_for_examine" 00:05:07.617 } 00:05:07.617 ] 00:05:07.617 }, 00:05:07.617 { 00:05:07.617 "subsystem": "scsi", 00:05:07.617 "config": null 00:05:07.617 }, 00:05:07.617 { 00:05:07.617 "subsystem": "scheduler", 00:05:07.617 "config": [ 00:05:07.617 { 00:05:07.617 "method": "framework_set_scheduler", 00:05:07.617 "params": { 00:05:07.617 "name": "static" 00:05:07.617 } 00:05:07.617 } 00:05:07.617 ] 00:05:07.617 }, 00:05:07.617 { 00:05:07.617 "subsystem": "vhost_scsi", 00:05:07.617 "config": [] 00:05:07.617 }, 00:05:07.617 { 00:05:07.617 "subsystem": "vhost_blk", 00:05:07.617 "config": [] 00:05:07.617 }, 00:05:07.617 { 00:05:07.617 "subsystem": "ublk", 00:05:07.617 "config": [] 00:05:07.617 }, 00:05:07.617 { 00:05:07.617 "subsystem": "nbd", 00:05:07.617 "config": [] 00:05:07.617 }, 00:05:07.617 { 00:05:07.617 "subsystem": "nvmf", 00:05:07.617 "config": [ 00:05:07.617 { 00:05:07.617 "method": "nvmf_set_config", 00:05:07.617 "params": { 00:05:07.617 "discovery_filter": "match_any", 00:05:07.617 "admin_cmd_passthru": { 00:05:07.617 "identify_ctrlr": false 00:05:07.617 }, 00:05:07.617 "dhchap_digests": [ 00:05:07.617 "sha256", 00:05:07.617 "sha384", 00:05:07.617 "sha512" 00:05:07.617 ], 00:05:07.617 "dhchap_dhgroups": [ 00:05:07.617 "null", 00:05:07.617 "ffdhe2048", 00:05:07.617 "ffdhe3072", 00:05:07.617 "ffdhe4096", 00:05:07.617 "ffdhe6144", 00:05:07.617 "ffdhe8192" 00:05:07.617 ] 00:05:07.617 } 00:05:07.617 }, 00:05:07.617 { 00:05:07.617 "method": "nvmf_set_max_subsystems", 00:05:07.617 "params": { 00:05:07.618 "max_subsystems": 1024 00:05:07.618 } 00:05:07.618 }, 00:05:07.618 { 00:05:07.618 "method": "nvmf_set_crdt", 00:05:07.618 "params": { 00:05:07.618 "crdt1": 0, 00:05:07.618 "crdt2": 0, 00:05:07.618 "crdt3": 0 00:05:07.618 } 00:05:07.618 }, 00:05:07.618 { 00:05:07.618 "method": "nvmf_create_transport", 00:05:07.618 "params": { 00:05:07.618 "trtype": "TCP", 00:05:07.618 "max_queue_depth": 128, 00:05:07.618 "max_io_qpairs_per_ctrlr": 127, 00:05:07.618 "in_capsule_data_size": 4096, 00:05:07.618 "max_io_size": 131072, 00:05:07.618 "io_unit_size": 131072, 00:05:07.618 "max_aq_depth": 128, 00:05:07.618 "num_shared_buffers": 511, 00:05:07.618 "buf_cache_size": 4294967295, 00:05:07.618 "dif_insert_or_strip": false, 00:05:07.618 "zcopy": false, 00:05:07.618 "c2h_success": true, 00:05:07.618 "sock_priority": 0, 00:05:07.618 "abort_timeout_sec": 1, 00:05:07.618 "ack_timeout": 0, 00:05:07.618 "data_wr_pool_size": 0 00:05:07.618 } 00:05:07.618 } 00:05:07.618 ] 00:05:07.618 }, 00:05:07.618 { 00:05:07.618 "subsystem": "iscsi", 00:05:07.618 "config": [ 00:05:07.618 { 00:05:07.618 "method": "iscsi_set_options", 00:05:07.618 "params": { 00:05:07.618 "node_base": "iqn.2016-06.io.spdk", 00:05:07.618 "max_sessions": 128, 00:05:07.618 
"max_connections_per_session": 2, 00:05:07.618 "max_queue_depth": 64, 00:05:07.618 "default_time2wait": 2, 00:05:07.618 "default_time2retain": 20, 00:05:07.618 "first_burst_length": 8192, 00:05:07.618 "immediate_data": true, 00:05:07.618 "allow_duplicated_isid": false, 00:05:07.618 "error_recovery_level": 0, 00:05:07.618 "nop_timeout": 60, 00:05:07.618 "nop_in_interval": 30, 00:05:07.618 "disable_chap": false, 00:05:07.618 "require_chap": false, 00:05:07.618 "mutual_chap": false, 00:05:07.618 "chap_group": 0, 00:05:07.618 "max_large_datain_per_connection": 64, 00:05:07.618 "max_r2t_per_connection": 4, 00:05:07.618 "pdu_pool_size": 36864, 00:05:07.618 "immediate_data_pool_size": 16384, 00:05:07.618 "data_out_pool_size": 2048 00:05:07.618 } 00:05:07.618 } 00:05:07.618 ] 00:05:07.618 } 00:05:07.618 ] 00:05:07.618 } 00:05:07.618 05:33:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:07.618 05:33:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3150021 00:05:07.618 05:33:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3150021 ']' 00:05:07.618 05:33:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3150021 00:05:07.618 05:33:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:07.618 05:33:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:07.618 05:33:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3150021 00:05:07.618 05:33:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:07.618 05:33:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:07.618 05:33:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3150021' 00:05:07.618 killing process with pid 3150021 00:05:07.618 05:33:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3150021 00:05:07.618 05:33:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3150021 00:05:07.877 05:33:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3150095 00:05:07.877 05:33:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:07.877 05:33:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:13.145 05:33:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3150095 00:05:13.145 05:33:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3150095 ']' 00:05:13.145 05:33:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3150095 00:05:13.145 05:33:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:13.145 05:33:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.145 05:33:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3150095 00:05:13.145 05:33:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.145 05:33:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.145 05:33:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 3150095' 00:05:13.145 killing process with pid 3150095 00:05:13.145 05:33:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3150095 00:05:13.145 05:33:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3150095 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:13.404 00:05:13.404 real 0m6.269s 00:05:13.404 user 0m5.993s 00:05:13.404 sys 0m0.571s 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:13.404 ************************************ 00:05:13.404 END TEST skip_rpc_with_json 00:05:13.404 ************************************ 00:05:13.404 05:33:47 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:13.404 05:33:47 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.404 05:33:47 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.404 05:33:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.404 ************************************ 00:05:13.404 START TEST skip_rpc_with_delay 00:05:13.404 ************************************ 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:13.404 [2024-12-16 
05:33:47.172948] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:13.404 [2024-12-16 05:33:47.173009] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:13.404 00:05:13.404 real 0m0.068s 00:05:13.404 user 0m0.041s 00:05:13.404 sys 0m0.027s 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.404 05:33:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:13.404 ************************************ 00:05:13.404 END TEST skip_rpc_with_delay 00:05:13.404 ************************************ 00:05:13.404 05:33:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:13.404 05:33:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:13.404 05:33:47 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:13.404 05:33:47 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.404 05:33:47 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.404 05:33:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.404 ************************************ 00:05:13.404 START TEST exit_on_failed_rpc_init 00:05:13.404 ************************************ 00:05:13.404 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:13.404 05:33:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3151155 00:05:13.663 05:33:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.663 05:33:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3151155 00:05:13.663 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 3151155 ']' 00:05:13.663 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.663 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.663 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.663 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.663 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:13.663 [2024-12-16 05:33:47.307461] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:13.663 [2024-12-16 05:33:47.307501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3151155 ] 00:05:13.663 [2024-12-16 05:33:47.362154] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.663 [2024-12-16 05:33:47.402332] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.922 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.922 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:13.922 05:33:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.922 05:33:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:13.922 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:13.922 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:13.922 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.922 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.922 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.922 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.922 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.922 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.922 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.922 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:13.922 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:13.922 [2024-12-16 05:33:47.649381] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:13.922 [2024-12-16 05:33:47.649423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3151210 ] 00:05:13.922 [2024-12-16 05:33:47.703211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.922 [2024-12-16 05:33:47.741690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.922 [2024-12-16 05:33:47.741754] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
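(Annotation: the "in use. Specify another." error above is the expected failure path of exit_on_failed_rpc_init — the first spdk_tgt, pid 3151155, already holds the default RPC socket /var/tmp/spdk.sock, so the second instance started with -m 0x2 cannot bind it and exits non-zero. As a rough sketch only, with illustrative socket paths and core masks that are not taken from this run, two targets can coexist when each is given its own socket via -r:

  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &    # first target on its own RPC socket
  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &    # second target on a separate socket, no clash
  ./scripts/rpc.py -s /var/tmp/spdk_a.sock save_config     # address either instance by pointing -s at its socket
)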
00:05:13.922 [2024-12-16 05:33:47.741765] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:13.922 [2024-12-16 05:33:47.741773] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:14.181 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:14.181 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:14.181 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:14.181 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:14.181 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:14.181 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:14.181 05:33:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:14.181 05:33:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3151155 00:05:14.181 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 3151155 ']' 00:05:14.181 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 3151155 00:05:14.181 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:14.181 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:14.181 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3151155 00:05:14.181 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:14.181 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:14.181 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3151155' 00:05:14.181 killing process with pid 3151155 00:05:14.181 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 3151155 00:05:14.181 05:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 3151155 00:05:14.440 00:05:14.440 real 0m0.908s 00:05:14.440 user 0m0.956s 00:05:14.440 sys 0m0.387s 00:05:14.440 05:33:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.440 05:33:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:14.440 ************************************ 00:05:14.440 END TEST exit_on_failed_rpc_init 00:05:14.440 ************************************ 00:05:14.440 05:33:48 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:14.440 00:05:14.440 real 0m13.066s 00:05:14.440 user 0m12.349s 00:05:14.440 sys 0m1.514s 00:05:14.440 05:33:48 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.440 05:33:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.440 ************************************ 00:05:14.440 END TEST skip_rpc 00:05:14.440 ************************************ 00:05:14.440 05:33:48 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:14.440 05:33:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.440 05:33:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.440 05:33:48 -- 
common/autotest_common.sh@10 -- # set +x 00:05:14.440 ************************************ 00:05:14.440 START TEST rpc_client 00:05:14.440 ************************************ 00:05:14.440 05:33:48 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:14.699 * Looking for test storage... 00:05:14.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:14.699 05:33:48 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:14.699 05:33:48 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:14.699 05:33:48 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:14.699 05:33:48 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.699 05:33:48 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:14.699 05:33:48 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.699 05:33:48 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:14.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.699 --rc genhtml_branch_coverage=1 00:05:14.699 --rc genhtml_function_coverage=1 00:05:14.699 --rc genhtml_legend=1 00:05:14.699 --rc geninfo_all_blocks=1 00:05:14.699 --rc geninfo_unexecuted_blocks=1 00:05:14.699 00:05:14.699 ' 00:05:14.699 05:33:48 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:14.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.699 --rc genhtml_branch_coverage=1 00:05:14.699 --rc genhtml_function_coverage=1 00:05:14.699 --rc genhtml_legend=1 00:05:14.699 --rc geninfo_all_blocks=1 00:05:14.699 --rc geninfo_unexecuted_blocks=1 00:05:14.699 00:05:14.699 ' 00:05:14.699 05:33:48 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:14.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.699 --rc genhtml_branch_coverage=1 00:05:14.699 --rc genhtml_function_coverage=1 00:05:14.699 --rc genhtml_legend=1 00:05:14.699 --rc geninfo_all_blocks=1 00:05:14.699 --rc geninfo_unexecuted_blocks=1 00:05:14.699 00:05:14.699 ' 00:05:14.699 05:33:48 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:14.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.699 --rc genhtml_branch_coverage=1 00:05:14.699 --rc genhtml_function_coverage=1 00:05:14.699 --rc genhtml_legend=1 00:05:14.699 --rc geninfo_all_blocks=1 00:05:14.699 --rc geninfo_unexecuted_blocks=1 00:05:14.699 00:05:14.699 ' 00:05:14.699 05:33:48 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:14.699 OK 00:05:14.699 05:33:48 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:14.699 00:05:14.699 real 0m0.198s 00:05:14.699 user 0m0.123s 00:05:14.699 sys 0m0.088s 00:05:14.699 05:33:48 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.699 05:33:48 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:14.699 ************************************ 00:05:14.699 END TEST rpc_client 00:05:14.699 ************************************ 00:05:14.699 05:33:48 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
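(Annotation: rpc_client_test above speaks JSON-RPC 2.0 to the target over its UNIX domain socket and reports OK. A minimal shell-level equivalent, sketched here with an assumed socket path and with rpc_get_methods / spdk_get_version as example introspection methods that are not part of this run's output, would be:

  ./build/bin/spdk_tgt -r /var/tmp/spdk.sock &
  ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods    # list every RPC method the target registered
  ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version   # simple request/response round trip
)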
00:05:14.699 05:33:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.699 05:33:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.699 05:33:48 -- common/autotest_common.sh@10 -- # set +x 00:05:14.699 ************************************ 00:05:14.699 START TEST json_config 00:05:14.699 ************************************ 00:05:14.699 05:33:48 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:14.959 05:33:48 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:14.959 05:33:48 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:14.959 05:33:48 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:14.959 05:33:48 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:14.959 05:33:48 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.959 05:33:48 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.959 05:33:48 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.959 05:33:48 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.959 05:33:48 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.959 05:33:48 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.959 05:33:48 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.959 05:33:48 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.959 05:33:48 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.959 05:33:48 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.959 05:33:48 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.959 05:33:48 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:14.959 05:33:48 json_config -- scripts/common.sh@345 -- # : 1 00:05:14.959 05:33:48 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.959 05:33:48 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.959 05:33:48 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:14.959 05:33:48 json_config -- scripts/common.sh@353 -- # local d=1 00:05:14.959 05:33:48 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.959 05:33:48 json_config -- scripts/common.sh@355 -- # echo 1 00:05:14.959 05:33:48 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.959 05:33:48 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:14.959 05:33:48 json_config -- scripts/common.sh@353 -- # local d=2 00:05:14.959 05:33:48 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.959 05:33:48 json_config -- scripts/common.sh@355 -- # echo 2 00:05:14.959 05:33:48 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.959 05:33:48 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.959 05:33:48 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.959 05:33:48 json_config -- scripts/common.sh@368 -- # return 0 00:05:14.959 05:33:48 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.959 05:33:48 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:14.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.959 --rc genhtml_branch_coverage=1 00:05:14.959 --rc genhtml_function_coverage=1 00:05:14.959 --rc genhtml_legend=1 00:05:14.959 --rc geninfo_all_blocks=1 00:05:14.959 --rc geninfo_unexecuted_blocks=1 00:05:14.959 00:05:14.959 ' 00:05:14.959 05:33:48 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:14.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.959 --rc genhtml_branch_coverage=1 00:05:14.959 --rc genhtml_function_coverage=1 00:05:14.959 --rc genhtml_legend=1 00:05:14.959 --rc geninfo_all_blocks=1 00:05:14.959 --rc geninfo_unexecuted_blocks=1 00:05:14.959 00:05:14.959 ' 00:05:14.959 05:33:48 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:14.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.959 --rc genhtml_branch_coverage=1 00:05:14.959 --rc genhtml_function_coverage=1 00:05:14.959 --rc genhtml_legend=1 00:05:14.959 --rc geninfo_all_blocks=1 00:05:14.959 --rc geninfo_unexecuted_blocks=1 00:05:14.959 00:05:14.959 ' 00:05:14.959 05:33:48 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:14.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.959 --rc genhtml_branch_coverage=1 00:05:14.959 --rc genhtml_function_coverage=1 00:05:14.959 --rc genhtml_legend=1 00:05:14.959 --rc geninfo_all_blocks=1 00:05:14.959 --rc geninfo_unexecuted_blocks=1 00:05:14.959 00:05:14.959 ' 00:05:14.959 05:33:48 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:14.959 05:33:48 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:14.959 05:33:48 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:14.959 05:33:48 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:14.959 05:33:48 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:14.959 05:33:48 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:14.959 05:33:48 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:14.959 05:33:48 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:14.959 05:33:48 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:14.959 05:33:48 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:14.959 05:33:48 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:14.959 05:33:48 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:14.959 05:33:48 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:14.959 05:33:48 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:14.959 05:33:48 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:14.959 05:33:48 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:14.959 05:33:48 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:14.959 05:33:48 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:14.959 05:33:48 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:14.959 05:33:48 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:14.959 05:33:48 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:14.959 05:33:48 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:14.959 05:33:48 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:14.960 05:33:48 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.960 05:33:48 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.960 05:33:48 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.960 05:33:48 json_config -- paths/export.sh@5 -- # export PATH 00:05:14.960 05:33:48 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.960 05:33:48 json_config -- nvmf/common.sh@51 -- # : 0 00:05:14.960 05:33:48 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:14.960 05:33:48 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:05:14.960 05:33:48 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:14.960 05:33:48 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:14.960 05:33:48 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:14.960 05:33:48 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:14.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:14.960 05:33:48 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:14.960 05:33:48 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:14.960 05:33:48 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:14.960 05:33:48 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:14.960 05:33:48 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:14.960 05:33:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:14.960 05:33:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:14.960 05:33:48 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:14.960 05:33:48 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:14.960 05:33:48 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:14.960 05:33:48 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:14.960 05:33:48 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:14.960 05:33:48 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:14.960 05:33:48 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:14.960 05:33:48 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:14.960 05:33:48 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:14.960 05:33:48 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:14.960 05:33:48 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:14.960 05:33:48 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:14.960 INFO: JSON configuration test init 00:05:14.960 05:33:48 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:14.960 05:33:48 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:14.960 05:33:48 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:14.960 05:33:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.960 05:33:48 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:14.960 05:33:48 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:14.960 05:33:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.960 05:33:48 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:14.960 05:33:48 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:14.960 05:33:48 json_config -- json_config/common.sh@10 -- # shift 00:05:14.960 05:33:48 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:14.960 05:33:48 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:14.960 05:33:48 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:14.960 05:33:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.960 05:33:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.960 05:33:48 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3151558 00:05:14.960 05:33:48 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:14.960 Waiting for target to run... 00:05:14.960 05:33:48 json_config -- json_config/common.sh@25 -- # waitforlisten 3151558 /var/tmp/spdk_tgt.sock 00:05:14.960 05:33:48 json_config -- common/autotest_common.sh@831 -- # '[' -z 3151558 ']' 00:05:14.960 05:33:48 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:14.960 05:33:48 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.960 05:33:48 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.960 05:33:48 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:14.960 05:33:48 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.960 05:33:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.960 [2024-12-16 05:33:48.773180] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:14.960 [2024-12-16 05:33:48.773230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3151558 ] 00:05:15.527 [2024-12-16 05:33:49.201024] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.527 [2024-12-16 05:33:49.233278] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.786 05:33:49 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.786 05:33:49 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:15.786 05:33:49 json_config -- json_config/common.sh@26 -- # echo '' 00:05:15.786 00:05:15.786 05:33:49 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:15.786 05:33:49 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:15.786 05:33:49 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:15.786 05:33:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.786 05:33:49 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:15.786 05:33:49 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:15.786 05:33:49 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:15.786 05:33:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.786 05:33:49 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:15.786 05:33:49 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:15.786 05:33:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:19.073 05:33:52 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:19.073 05:33:52 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:19.073 05:33:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:19.073 05:33:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.073 05:33:52 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:19.073 05:33:52 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:19.073 05:33:52 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:19.073 05:33:52 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:19.073 05:33:52 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:19.073 05:33:52 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:19.073 05:33:52 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:19.073 05:33:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:19.073 05:33:52 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:19.074 05:33:52 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:19.074 05:33:52 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:19.074 05:33:52 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:19.074 05:33:52 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:19.074 05:33:52 json_config -- json_config/json_config.sh@54 -- # sort 00:05:19.074 05:33:52 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:19.074 05:33:52 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:19.074 05:33:52 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:19.074 05:33:52 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:19.074 05:33:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:19.074 05:33:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.332 05:33:52 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:19.332 05:33:52 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:19.332 05:33:52 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:19.332 05:33:52 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:19.332 05:33:52 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:19.332 05:33:52 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:19.332 05:33:52 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:19.332 05:33:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:19.332 05:33:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.332 05:33:52 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:19.332 05:33:52 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:19.332 05:33:52 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:19.332 05:33:52 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:19.332 05:33:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:19.332 MallocForNvmf0 00:05:19.332 05:33:53 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:19.332 05:33:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:19.591 MallocForNvmf1 00:05:19.591 05:33:53 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:19.591 05:33:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:19.849 [2024-12-16 05:33:53.482449] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:19.849 05:33:53 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:19.849 05:33:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:19.849 05:33:53 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:19.849 05:33:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:20.108 05:33:53 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:20.108 05:33:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:20.366 05:33:54 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:20.366 05:33:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:20.624 [2024-12-16 05:33:54.248778] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:20.624 05:33:54 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:20.624 05:33:54 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:20.624 05:33:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.624 05:33:54 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:20.624 05:33:54 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:20.624 05:33:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.624 05:33:54 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:20.624 05:33:54 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:20.624 05:33:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:20.883 MallocBdevForConfigChangeCheck 00:05:20.883 05:33:54 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:20.883 05:33:54 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:20.883 05:33:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.883 05:33:54 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:20.883 05:33:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.142 05:33:54 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:21.142 INFO: shutting down applications... 
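(Annotation: the "shutting down applications" step above caps off an NVMe-oF/TCP target that the json_config test built entirely through rpc.py. Collected in one place — these are the same calls issued through the tgt_rpc wrapper in the trace, with the final redirect mirroring how the script writes its spdk_tgt_config.json snapshot — the sequence is:

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json   # snapshot reused for the relaunch below
)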
00:05:21.142 05:33:54 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:21.142 05:33:54 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:21.142 05:33:54 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:21.142 05:33:54 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:23.048 Calling clear_iscsi_subsystem 00:05:23.048 Calling clear_nvmf_subsystem 00:05:23.048 Calling clear_nbd_subsystem 00:05:23.048 Calling clear_ublk_subsystem 00:05:23.048 Calling clear_vhost_blk_subsystem 00:05:23.048 Calling clear_vhost_scsi_subsystem 00:05:23.048 Calling clear_bdev_subsystem 00:05:23.048 05:33:56 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:23.048 05:33:56 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:23.048 05:33:56 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:23.048 05:33:56 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:23.048 05:33:56 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:23.048 05:33:56 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:23.048 05:33:56 json_config -- json_config/json_config.sh@352 -- # break 00:05:23.048 05:33:56 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:23.048 05:33:56 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:23.048 05:33:56 json_config -- json_config/common.sh@31 -- # local app=target 00:05:23.048 05:33:56 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:23.048 05:33:56 json_config -- json_config/common.sh@35 -- # [[ -n 3151558 ]] 00:05:23.048 05:33:56 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3151558 00:05:23.048 05:33:56 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:23.048 05:33:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.048 05:33:56 json_config -- json_config/common.sh@41 -- # kill -0 3151558 00:05:23.048 05:33:56 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:23.617 05:33:57 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:23.617 05:33:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.617 05:33:57 json_config -- json_config/common.sh@41 -- # kill -0 3151558 00:05:23.617 05:33:57 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:23.617 05:33:57 json_config -- json_config/common.sh@43 -- # break 00:05:23.617 05:33:57 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:23.617 05:33:57 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:23.617 SPDK target shutdown done 00:05:23.617 05:33:57 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:23.617 INFO: relaunching applications... 
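(Annotation: the shutdown above follows a clear-then-stop pattern: clear_config.py deletes every subsystem's objects on the live target, the save_config output is filtered until it is empty (retried up to 100 times), and the target is sent SIGINT and polled with kill -0 until it exits; the trace below then relaunches it from the saved spdk_tgt_config.json via --json. Roughly the pipeline the script runs, with paths abbreviated relative to the spdk checkout:

  ./test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config       # tear down the live configuration
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | ./test/json_config/config_filter.py -method delete_global_parameters \
    | ./test/json_config/config_filter.py -method check_empty                     # strip globals, verify nothing else remains
)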
00:05:23.617 05:33:57 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.617 05:33:57 json_config -- json_config/common.sh@9 -- # local app=target 00:05:23.617 05:33:57 json_config -- json_config/common.sh@10 -- # shift 00:05:23.617 05:33:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:23.617 05:33:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:23.617 05:33:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:23.617 05:33:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.617 05:33:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.617 05:33:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3153039 00:05:23.617 05:33:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:23.617 Waiting for target to run... 00:05:23.617 05:33:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.617 05:33:57 json_config -- json_config/common.sh@25 -- # waitforlisten 3153039 /var/tmp/spdk_tgt.sock 00:05:23.617 05:33:57 json_config -- common/autotest_common.sh@831 -- # '[' -z 3153039 ']' 00:05:23.617 05:33:57 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:23.617 05:33:57 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.617 05:33:57 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:23.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:23.617 05:33:57 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.617 05:33:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.617 [2024-12-16 05:33:57.339479] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:23.617 [2024-12-16 05:33:57.339536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3153039 ] 00:05:24.185 [2024-12-16 05:33:57.773999] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.185 [2024-12-16 05:33:57.804304] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.473 [2024-12-16 05:34:00.812662] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:27.473 [2024-12-16 05:34:00.844945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:27.731 05:34:01 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.731 05:34:01 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:27.731 05:34:01 json_config -- json_config/common.sh@26 -- # echo '' 00:05:27.731 00:05:27.731 05:34:01 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:27.731 05:34:01 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:27.731 INFO: Checking if target configuration is the same... 
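(Annotation: what follows is the actual comparison. json_diff.sh snapshots the relaunched target's configuration and diffs it against the spdk_tgt_config.json it was started from, passing both sides through config_filter.py -method sort so ordering differences do not produce false mismatches. In outline — the real script uses mktemp-generated file names and process substitution rather than the fixed paths shown here:

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
  ./test/json_config/config_filter.py -method sort < /tmp/live.json       > /tmp/live.sorted
  ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.sorted
  diff -u /tmp/live.sorted /tmp/saved.sorted && echo 'INFO: JSON config files are the same'
)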
00:05:27.731 05:34:01 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:27.731 05:34:01 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:27.732 05:34:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:27.732 + '[' 2 -ne 2 ']' 00:05:27.732 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:27.732 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:27.732 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:27.732 +++ basename /dev/fd/62 00:05:27.732 ++ mktemp /tmp/62.XXX 00:05:27.732 + tmp_file_1=/tmp/62.skb 00:05:27.732 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:27.732 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:27.732 + tmp_file_2=/tmp/spdk_tgt_config.json.irs 00:05:27.732 + ret=0 00:05:27.732 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:28.298 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:28.298 + diff -u /tmp/62.skb /tmp/spdk_tgt_config.json.irs 00:05:28.298 + echo 'INFO: JSON config files are the same' 00:05:28.298 INFO: JSON config files are the same 00:05:28.298 + rm /tmp/62.skb /tmp/spdk_tgt_config.json.irs 00:05:28.298 + exit 0 00:05:28.298 05:34:01 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:28.298 05:34:01 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:28.298 INFO: changing configuration and checking if this can be detected... 00:05:28.298 05:34:01 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:28.298 05:34:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:28.298 05:34:02 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.298 05:34:02 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:28.298 05:34:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:28.299 + '[' 2 -ne 2 ']' 00:05:28.299 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:28.299 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:28.299 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:28.299 +++ basename /dev/fd/62 00:05:28.299 ++ mktemp /tmp/62.XXX 00:05:28.299 + tmp_file_1=/tmp/62.Fa7 00:05:28.299 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.299 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:28.557 + tmp_file_2=/tmp/spdk_tgt_config.json.wrr 00:05:28.557 + ret=0 00:05:28.557 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:28.816 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:28.816 + diff -u /tmp/62.Fa7 /tmp/spdk_tgt_config.json.wrr 00:05:28.816 + ret=1 00:05:28.816 + echo '=== Start of file: /tmp/62.Fa7 ===' 00:05:28.816 + cat /tmp/62.Fa7 00:05:28.816 + echo '=== End of file: /tmp/62.Fa7 ===' 00:05:28.816 + echo '' 00:05:28.816 + echo '=== Start of file: /tmp/spdk_tgt_config.json.wrr ===' 00:05:28.816 + cat /tmp/spdk_tgt_config.json.wrr 00:05:28.816 + echo '=== End of file: /tmp/spdk_tgt_config.json.wrr ===' 00:05:28.816 + echo '' 00:05:28.816 + rm /tmp/62.Fa7 /tmp/spdk_tgt_config.json.wrr 00:05:28.816 + exit 1 00:05:28.816 05:34:02 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:28.816 INFO: configuration change detected. 00:05:28.816 05:34:02 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:28.816 05:34:02 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:28.816 05:34:02 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:28.816 05:34:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.816 05:34:02 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:28.816 05:34:02 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:28.816 05:34:02 json_config -- json_config/json_config.sh@324 -- # [[ -n 3153039 ]] 00:05:28.816 05:34:02 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:28.816 05:34:02 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:28.816 05:34:02 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:28.816 05:34:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.816 05:34:02 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:28.816 05:34:02 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:28.816 05:34:02 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:28.816 05:34:02 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:28.816 05:34:02 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:28.816 05:34:02 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:28.816 05:34:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:28.816 05:34:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.816 05:34:02 json_config -- json_config/json_config.sh@330 -- # killprocess 3153039 00:05:28.816 05:34:02 json_config -- common/autotest_common.sh@950 -- # '[' -z 3153039 ']' 00:05:28.816 05:34:02 json_config -- common/autotest_common.sh@954 -- # kill -0 3153039 00:05:28.816 05:34:02 json_config -- common/autotest_common.sh@955 -- # uname 00:05:28.816 05:34:02 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:28.816 05:34:02 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3153039 00:05:28.816 05:34:02 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:28.816 05:34:02 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:28.816 05:34:02 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3153039' 00:05:28.816 killing process with pid 3153039 00:05:28.816 05:34:02 json_config -- common/autotest_common.sh@969 -- # kill 3153039 00:05:28.816 05:34:02 json_config -- common/autotest_common.sh@974 -- # wait 3153039 00:05:30.722 05:34:04 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:30.722 05:34:04 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:30.722 05:34:04 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:30.722 05:34:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.722 05:34:04 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:30.723 05:34:04 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:30.723 INFO: Success 00:05:30.723 00:05:30.723 real 0m15.626s 00:05:30.723 user 0m16.612s 00:05:30.723 sys 0m2.037s 00:05:30.723 05:34:04 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.723 05:34:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.723 ************************************ 00:05:30.723 END TEST json_config 00:05:30.723 ************************************ 00:05:30.723 05:34:04 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:30.723 05:34:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.723 05:34:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.723 05:34:04 -- common/autotest_common.sh@10 -- # set +x 00:05:30.723 ************************************ 00:05:30.723 START TEST json_config_extra_key 00:05:30.723 ************************************ 00:05:30.723 05:34:04 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:30.723 05:34:04 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:30.723 05:34:04 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:30.723 05:34:04 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:30.723 05:34:04 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.723 05:34:04 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.723 05:34:04 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:30.723 05:34:04 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.723 05:34:04 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:30.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.723 --rc genhtml_branch_coverage=1 00:05:30.723 --rc genhtml_function_coverage=1 00:05:30.723 --rc genhtml_legend=1 00:05:30.723 --rc geninfo_all_blocks=1 00:05:30.723 --rc geninfo_unexecuted_blocks=1 00:05:30.723 00:05:30.723 ' 00:05:30.723 05:34:04 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:30.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.723 --rc genhtml_branch_coverage=1 00:05:30.723 --rc genhtml_function_coverage=1 00:05:30.723 --rc genhtml_legend=1 00:05:30.723 --rc geninfo_all_blocks=1 00:05:30.723 --rc geninfo_unexecuted_blocks=1 00:05:30.723 00:05:30.723 ' 00:05:30.723 05:34:04 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:30.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.723 --rc genhtml_branch_coverage=1 00:05:30.723 --rc genhtml_function_coverage=1 00:05:30.723 --rc genhtml_legend=1 00:05:30.723 --rc geninfo_all_blocks=1 00:05:30.723 --rc geninfo_unexecuted_blocks=1 00:05:30.723 00:05:30.723 ' 00:05:30.723 05:34:04 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:30.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.723 --rc genhtml_branch_coverage=1 00:05:30.723 --rc genhtml_function_coverage=1 00:05:30.723 --rc genhtml_legend=1 00:05:30.723 --rc geninfo_all_blocks=1 00:05:30.723 --rc geninfo_unexecuted_blocks=1 00:05:30.723 00:05:30.723 ' 00:05:30.723 05:34:04 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:30.723 05:34:04 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:30.723 05:34:04 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.723 05:34:04 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.723 05:34:04 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.723 05:34:04 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.723 05:34:04 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.723 05:34:04 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.723 05:34:04 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:30.723 05:34:04 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.723 05:34:04 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:30.723 05:34:04 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:30.723 05:34:04 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:30.723 05:34:04 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:30.723 05:34:04 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:30.723 05:34:04 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:30.723 05:34:04 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:30.723 05:34:04 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:30.724 05:34:04 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:30.724 05:34:04 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:30.724 05:34:04 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.724 05:34:04 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.724 05:34:04 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.724 05:34:04 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.724 05:34:04 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.724 05:34:04 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.724 05:34:04 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:30.724 05:34:04 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.724 05:34:04 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:30.724 05:34:04 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:30.724 05:34:04 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:30.724 05:34:04 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:30.724 05:34:04 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:30.724 05:34:04 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:30.724 05:34:04 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:30.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:30.724 05:34:04 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:30.724 05:34:04 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:30.724 05:34:04 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:30.724 05:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:30.724 05:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:30.724 05:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:30.724 05:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:30.724 05:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:30.724 05:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:30.724 05:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:30.724 05:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:30.724 05:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:30.724 05:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:30.724 05:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:30.724 INFO: launching applications... 
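Launching here means starting spdk_tgt against test/json_config/extra_key.json, a config that (as the test name suggests) carries keys the target is expected to tolerate, and then waiting for its RPC socket to answer; the test only cares that startup still succeeds. A minimal stand-in for that launch-and-wait step, assuming the simplified polling loop below in place of the real waitforlisten helper (which retries up to 100 times with its own xtrace handling):

  # Sketch: start the target with extra_key.json and wait for its RPC socket.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json $SPDK/test/json_config/extra_key.json &
  app_pid=$!
  for i in $(seq 1 100); do
      # the target counts as "up" once the UNIX-domain RPC socket answers
      $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done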
00:05:30.724 05:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:30.724 05:34:04 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:30.724 05:34:04 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:30.724 05:34:04 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:30.724 05:34:04 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:30.724 05:34:04 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:30.724 05:34:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.724 05:34:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.724 05:34:04 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3154285 00:05:30.724 05:34:04 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:30.724 Waiting for target to run... 00:05:30.724 05:34:04 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3154285 /var/tmp/spdk_tgt.sock 00:05:30.724 05:34:04 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 3154285 ']' 00:05:30.724 05:34:04 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:30.724 05:34:04 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.724 05:34:04 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:30.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:30.724 05:34:04 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.724 05:34:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:30.724 05:34:04 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:30.724 [2024-12-16 05:34:04.424569] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:30.724 [2024-12-16 05:34:04.424615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3154285 ] 00:05:30.983 [2024-12-16 05:34:04.696049] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.983 [2024-12-16 05:34:04.719863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.550 05:34:05 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.550 05:34:05 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:31.550 05:34:05 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:31.550 00:05:31.550 05:34:05 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:31.550 INFO: shutting down applications... 
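Shutdown goes through json_config_test_shutdown_app: send SIGINT to the recorded pid, then poll with kill -0 for up to 30 half-second intervals until the process disappears, which is exactly what the trace below shows. Condensed to its essentials:

  # Condensed form of the shutdown loop in json_config/common.sh (as traced below).
  kill -SIGINT "$app_pid"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$app_pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
      sleep 0.5
  done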
00:05:31.550 05:34:05 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:31.550 05:34:05 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:31.550 05:34:05 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:31.550 05:34:05 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3154285 ]] 00:05:31.550 05:34:05 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3154285 00:05:31.550 05:34:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:31.550 05:34:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.550 05:34:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3154285 00:05:31.550 05:34:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:32.118 05:34:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:32.118 05:34:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.118 05:34:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3154285 00:05:32.118 05:34:05 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:32.118 05:34:05 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:32.118 05:34:05 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:32.118 05:34:05 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:32.118 SPDK target shutdown done 00:05:32.118 05:34:05 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:32.118 Success 00:05:32.118 00:05:32.118 real 0m1.526s 00:05:32.118 user 0m1.316s 00:05:32.118 sys 0m0.385s 00:05:32.118 05:34:05 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.118 05:34:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:32.118 ************************************ 00:05:32.118 END TEST json_config_extra_key 00:05:32.118 ************************************ 00:05:32.118 05:34:05 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:32.118 05:34:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.118 05:34:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.118 05:34:05 -- common/autotest_common.sh@10 -- # set +x 00:05:32.118 ************************************ 00:05:32.118 START TEST alias_rpc 00:05:32.118 ************************************ 00:05:32.118 05:34:05 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:32.118 * Looking for test storage... 
00:05:32.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:32.118 05:34:05 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:32.118 05:34:05 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:32.118 05:34:05 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:32.377 05:34:05 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.377 05:34:05 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:32.377 05:34:05 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.377 05:34:05 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:32.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.377 --rc genhtml_branch_coverage=1 00:05:32.377 --rc genhtml_function_coverage=1 00:05:32.377 --rc genhtml_legend=1 00:05:32.377 --rc geninfo_all_blocks=1 00:05:32.377 --rc geninfo_unexecuted_blocks=1 00:05:32.377 00:05:32.377 ' 00:05:32.377 05:34:05 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:32.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.377 --rc genhtml_branch_coverage=1 00:05:32.377 --rc genhtml_function_coverage=1 00:05:32.377 --rc genhtml_legend=1 00:05:32.377 --rc geninfo_all_blocks=1 00:05:32.377 --rc geninfo_unexecuted_blocks=1 00:05:32.377 00:05:32.377 ' 00:05:32.377 05:34:05 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:32.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.377 --rc genhtml_branch_coverage=1 00:05:32.377 --rc genhtml_function_coverage=1 00:05:32.377 --rc genhtml_legend=1 00:05:32.377 --rc geninfo_all_blocks=1 00:05:32.377 --rc geninfo_unexecuted_blocks=1 00:05:32.377 00:05:32.377 ' 00:05:32.377 05:34:05 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:32.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.377 --rc genhtml_branch_coverage=1 00:05:32.377 --rc genhtml_function_coverage=1 00:05:32.377 --rc genhtml_legend=1 00:05:32.377 --rc geninfo_all_blocks=1 00:05:32.377 --rc geninfo_unexecuted_blocks=1 00:05:32.377 00:05:32.377 ' 00:05:32.377 05:34:05 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:32.377 05:34:05 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3154701 00:05:32.377 05:34:05 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.377 05:34:05 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3154701 00:05:32.377 05:34:05 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 3154701 ']' 00:05:32.377 05:34:05 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.377 05:34:05 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.377 05:34:05 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.377 05:34:05 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.377 05:34:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.377 [2024-12-16 05:34:06.042438] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:32.377 [2024-12-16 05:34:06.042486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3154701 ] 00:05:32.377 [2024-12-16 05:34:06.097984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.377 [2024-12-16 05:34:06.138269] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.636 05:34:06 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.636 05:34:06 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:32.636 05:34:06 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:32.895 05:34:06 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3154701 00:05:32.895 05:34:06 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 3154701 ']' 00:05:32.895 05:34:06 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 3154701 00:05:32.895 05:34:06 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:32.895 05:34:06 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.895 05:34:06 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3154701 00:05:32.895 05:34:06 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.895 05:34:06 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.895 05:34:06 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3154701' 00:05:32.895 killing process with pid 3154701 00:05:32.895 05:34:06 alias_rpc -- common/autotest_common.sh@969 -- # kill 3154701 00:05:32.895 05:34:06 alias_rpc -- common/autotest_common.sh@974 -- # wait 3154701 00:05:33.155 00:05:33.155 real 0m1.099s 00:05:33.155 user 0m1.117s 00:05:33.155 sys 0m0.408s 00:05:33.155 05:34:06 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.155 05:34:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.155 ************************************ 00:05:33.155 END TEST alias_rpc 00:05:33.155 ************************************ 00:05:33.155 05:34:06 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:33.155 05:34:06 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:33.155 05:34:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.155 05:34:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.155 05:34:06 -- common/autotest_common.sh@10 -- # set +x 00:05:33.155 ************************************ 00:05:33.155 START TEST spdkcli_tcp 00:05:33.155 ************************************ 00:05:33.155 05:34:06 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:33.415 * Looking for test storage... 
00:05:33.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:33.415 05:34:07 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:33.415 05:34:07 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:33.415 05:34:07 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:33.415 05:34:07 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.415 05:34:07 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:33.415 05:34:07 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.415 05:34:07 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:33.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.415 --rc genhtml_branch_coverage=1 00:05:33.415 --rc genhtml_function_coverage=1 00:05:33.415 --rc genhtml_legend=1 00:05:33.415 --rc geninfo_all_blocks=1 00:05:33.415 --rc geninfo_unexecuted_blocks=1 00:05:33.415 00:05:33.415 ' 00:05:33.415 05:34:07 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:33.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.415 --rc genhtml_branch_coverage=1 00:05:33.415 --rc genhtml_function_coverage=1 00:05:33.415 --rc genhtml_legend=1 00:05:33.415 --rc geninfo_all_blocks=1 00:05:33.415 --rc 
geninfo_unexecuted_blocks=1 00:05:33.415 00:05:33.415 ' 00:05:33.415 05:34:07 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:33.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.415 --rc genhtml_branch_coverage=1 00:05:33.415 --rc genhtml_function_coverage=1 00:05:33.415 --rc genhtml_legend=1 00:05:33.415 --rc geninfo_all_blocks=1 00:05:33.415 --rc geninfo_unexecuted_blocks=1 00:05:33.415 00:05:33.415 ' 00:05:33.415 05:34:07 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:33.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.415 --rc genhtml_branch_coverage=1 00:05:33.415 --rc genhtml_function_coverage=1 00:05:33.415 --rc genhtml_legend=1 00:05:33.415 --rc geninfo_all_blocks=1 00:05:33.415 --rc geninfo_unexecuted_blocks=1 00:05:33.415 00:05:33.415 ' 00:05:33.415 05:34:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:33.415 05:34:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:33.415 05:34:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:33.415 05:34:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:33.415 05:34:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:33.415 05:34:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:33.415 05:34:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:33.415 05:34:07 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:33.415 05:34:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.415 05:34:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3154870 00:05:33.415 05:34:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3154870 00:05:33.415 05:34:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:33.415 05:34:07 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 3154870 ']' 00:05:33.415 05:34:07 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.415 05:34:07 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.415 05:34:07 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.415 05:34:07 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.415 05:34:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.415 [2024-12-16 05:34:07.215213] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
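Once this target (reactor mask 0x3) is listening on /var/tmp/spdk.sock, tcp.sh exposes the RPC service over TCP by bridging port 9998 to that UNIX socket with socat, and rpc.py is then pointed at 127.0.0.1:9998; the long rpc_get_methods listing further below is fetched through that bridge. A condensed sketch of the bridge, without the test's error-cleanup trap:

  # Sketch: bridge TCP port 9998 to the target's UNIX-domain RPC socket.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # talk to the same target over TCP instead of the UNIX socket
  $SPDK/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  # socat exits on its own once the single connection closes
  kill "$socat_pid" 2>/dev/null || true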
00:05:33.415 [2024-12-16 05:34:07.215259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3154870 ] 00:05:33.415 [2024-12-16 05:34:07.269482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.674 [2024-12-16 05:34:07.310933] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.674 [2024-12-16 05:34:07.310938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.674 05:34:07 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.674 05:34:07 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:33.674 05:34:07 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3155068 00:05:33.674 05:34:07 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:33.674 05:34:07 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:33.934 [ 00:05:33.934 "bdev_malloc_delete", 00:05:33.934 "bdev_malloc_create", 00:05:33.934 "bdev_null_resize", 00:05:33.934 "bdev_null_delete", 00:05:33.934 "bdev_null_create", 00:05:33.934 "bdev_nvme_cuse_unregister", 00:05:33.934 "bdev_nvme_cuse_register", 00:05:33.934 "bdev_opal_new_user", 00:05:33.934 "bdev_opal_set_lock_state", 00:05:33.934 "bdev_opal_delete", 00:05:33.934 "bdev_opal_get_info", 00:05:33.934 "bdev_opal_create", 00:05:33.934 "bdev_nvme_opal_revert", 00:05:33.934 "bdev_nvme_opal_init", 00:05:33.934 "bdev_nvme_send_cmd", 00:05:33.934 "bdev_nvme_set_keys", 00:05:33.934 "bdev_nvme_get_path_iostat", 00:05:33.934 "bdev_nvme_get_mdns_discovery_info", 00:05:33.934 "bdev_nvme_stop_mdns_discovery", 00:05:33.934 "bdev_nvme_start_mdns_discovery", 00:05:33.934 "bdev_nvme_set_multipath_policy", 00:05:33.934 "bdev_nvme_set_preferred_path", 00:05:33.934 "bdev_nvme_get_io_paths", 00:05:33.934 "bdev_nvme_remove_error_injection", 00:05:33.934 "bdev_nvme_add_error_injection", 00:05:33.934 "bdev_nvme_get_discovery_info", 00:05:33.934 "bdev_nvme_stop_discovery", 00:05:33.934 "bdev_nvme_start_discovery", 00:05:33.934 "bdev_nvme_get_controller_health_info", 00:05:33.934 "bdev_nvme_disable_controller", 00:05:33.934 "bdev_nvme_enable_controller", 00:05:33.934 "bdev_nvme_reset_controller", 00:05:33.934 "bdev_nvme_get_transport_statistics", 00:05:33.934 "bdev_nvme_apply_firmware", 00:05:33.934 "bdev_nvme_detach_controller", 00:05:33.934 "bdev_nvme_get_controllers", 00:05:33.934 "bdev_nvme_attach_controller", 00:05:33.934 "bdev_nvme_set_hotplug", 00:05:33.934 "bdev_nvme_set_options", 00:05:33.934 "bdev_passthru_delete", 00:05:33.934 "bdev_passthru_create", 00:05:33.934 "bdev_lvol_set_parent_bdev", 00:05:33.934 "bdev_lvol_set_parent", 00:05:33.934 "bdev_lvol_check_shallow_copy", 00:05:33.934 "bdev_lvol_start_shallow_copy", 00:05:33.934 "bdev_lvol_grow_lvstore", 00:05:33.934 "bdev_lvol_get_lvols", 00:05:33.934 "bdev_lvol_get_lvstores", 00:05:33.934 "bdev_lvol_delete", 00:05:33.934 "bdev_lvol_set_read_only", 00:05:33.934 "bdev_lvol_resize", 00:05:33.934 "bdev_lvol_decouple_parent", 00:05:33.934 "bdev_lvol_inflate", 00:05:33.934 "bdev_lvol_rename", 00:05:33.934 "bdev_lvol_clone_bdev", 00:05:33.934 "bdev_lvol_clone", 00:05:33.934 "bdev_lvol_snapshot", 00:05:33.934 "bdev_lvol_create", 00:05:33.934 "bdev_lvol_delete_lvstore", 00:05:33.934 "bdev_lvol_rename_lvstore", 
00:05:33.934 "bdev_lvol_create_lvstore", 00:05:33.934 "bdev_raid_set_options", 00:05:33.934 "bdev_raid_remove_base_bdev", 00:05:33.934 "bdev_raid_add_base_bdev", 00:05:33.934 "bdev_raid_delete", 00:05:33.934 "bdev_raid_create", 00:05:33.934 "bdev_raid_get_bdevs", 00:05:33.934 "bdev_error_inject_error", 00:05:33.934 "bdev_error_delete", 00:05:33.934 "bdev_error_create", 00:05:33.934 "bdev_split_delete", 00:05:33.934 "bdev_split_create", 00:05:33.934 "bdev_delay_delete", 00:05:33.934 "bdev_delay_create", 00:05:33.934 "bdev_delay_update_latency", 00:05:33.934 "bdev_zone_block_delete", 00:05:33.934 "bdev_zone_block_create", 00:05:33.934 "blobfs_create", 00:05:33.934 "blobfs_detect", 00:05:33.934 "blobfs_set_cache_size", 00:05:33.934 "bdev_aio_delete", 00:05:33.934 "bdev_aio_rescan", 00:05:33.934 "bdev_aio_create", 00:05:33.934 "bdev_ftl_set_property", 00:05:33.934 "bdev_ftl_get_properties", 00:05:33.934 "bdev_ftl_get_stats", 00:05:33.934 "bdev_ftl_unmap", 00:05:33.934 "bdev_ftl_unload", 00:05:33.934 "bdev_ftl_delete", 00:05:33.934 "bdev_ftl_load", 00:05:33.934 "bdev_ftl_create", 00:05:33.934 "bdev_virtio_attach_controller", 00:05:33.934 "bdev_virtio_scsi_get_devices", 00:05:33.934 "bdev_virtio_detach_controller", 00:05:33.934 "bdev_virtio_blk_set_hotplug", 00:05:33.934 "bdev_iscsi_delete", 00:05:33.934 "bdev_iscsi_create", 00:05:33.934 "bdev_iscsi_set_options", 00:05:33.934 "accel_error_inject_error", 00:05:33.934 "ioat_scan_accel_module", 00:05:33.934 "dsa_scan_accel_module", 00:05:33.934 "iaa_scan_accel_module", 00:05:33.934 "vfu_virtio_create_fs_endpoint", 00:05:33.934 "vfu_virtio_create_scsi_endpoint", 00:05:33.934 "vfu_virtio_scsi_remove_target", 00:05:33.934 "vfu_virtio_scsi_add_target", 00:05:33.934 "vfu_virtio_create_blk_endpoint", 00:05:33.934 "vfu_virtio_delete_endpoint", 00:05:33.934 "keyring_file_remove_key", 00:05:33.934 "keyring_file_add_key", 00:05:33.934 "keyring_linux_set_options", 00:05:33.934 "fsdev_aio_delete", 00:05:33.934 "fsdev_aio_create", 00:05:33.934 "iscsi_get_histogram", 00:05:33.934 "iscsi_enable_histogram", 00:05:33.934 "iscsi_set_options", 00:05:33.934 "iscsi_get_auth_groups", 00:05:33.934 "iscsi_auth_group_remove_secret", 00:05:33.934 "iscsi_auth_group_add_secret", 00:05:33.934 "iscsi_delete_auth_group", 00:05:33.934 "iscsi_create_auth_group", 00:05:33.934 "iscsi_set_discovery_auth", 00:05:33.934 "iscsi_get_options", 00:05:33.934 "iscsi_target_node_request_logout", 00:05:33.934 "iscsi_target_node_set_redirect", 00:05:33.934 "iscsi_target_node_set_auth", 00:05:33.934 "iscsi_target_node_add_lun", 00:05:33.934 "iscsi_get_stats", 00:05:33.934 "iscsi_get_connections", 00:05:33.934 "iscsi_portal_group_set_auth", 00:05:33.934 "iscsi_start_portal_group", 00:05:33.934 "iscsi_delete_portal_group", 00:05:33.934 "iscsi_create_portal_group", 00:05:33.934 "iscsi_get_portal_groups", 00:05:33.934 "iscsi_delete_target_node", 00:05:33.934 "iscsi_target_node_remove_pg_ig_maps", 00:05:33.934 "iscsi_target_node_add_pg_ig_maps", 00:05:33.934 "iscsi_create_target_node", 00:05:33.934 "iscsi_get_target_nodes", 00:05:33.934 "iscsi_delete_initiator_group", 00:05:33.934 "iscsi_initiator_group_remove_initiators", 00:05:33.934 "iscsi_initiator_group_add_initiators", 00:05:33.934 "iscsi_create_initiator_group", 00:05:33.934 "iscsi_get_initiator_groups", 00:05:33.934 "nvmf_set_crdt", 00:05:33.934 "nvmf_set_config", 00:05:33.934 "nvmf_set_max_subsystems", 00:05:33.934 "nvmf_stop_mdns_prr", 00:05:33.934 "nvmf_publish_mdns_prr", 00:05:33.934 "nvmf_subsystem_get_listeners", 00:05:33.934 
"nvmf_subsystem_get_qpairs", 00:05:33.934 "nvmf_subsystem_get_controllers", 00:05:33.934 "nvmf_get_stats", 00:05:33.934 "nvmf_get_transports", 00:05:33.934 "nvmf_create_transport", 00:05:33.934 "nvmf_get_targets", 00:05:33.934 "nvmf_delete_target", 00:05:33.934 "nvmf_create_target", 00:05:33.934 "nvmf_subsystem_allow_any_host", 00:05:33.934 "nvmf_subsystem_set_keys", 00:05:33.934 "nvmf_subsystem_remove_host", 00:05:33.934 "nvmf_subsystem_add_host", 00:05:33.934 "nvmf_ns_remove_host", 00:05:33.934 "nvmf_ns_add_host", 00:05:33.934 "nvmf_subsystem_remove_ns", 00:05:33.934 "nvmf_subsystem_set_ns_ana_group", 00:05:33.934 "nvmf_subsystem_add_ns", 00:05:33.934 "nvmf_subsystem_listener_set_ana_state", 00:05:33.934 "nvmf_discovery_get_referrals", 00:05:33.934 "nvmf_discovery_remove_referral", 00:05:33.934 "nvmf_discovery_add_referral", 00:05:33.934 "nvmf_subsystem_remove_listener", 00:05:33.934 "nvmf_subsystem_add_listener", 00:05:33.934 "nvmf_delete_subsystem", 00:05:33.934 "nvmf_create_subsystem", 00:05:33.934 "nvmf_get_subsystems", 00:05:33.934 "env_dpdk_get_mem_stats", 00:05:33.934 "nbd_get_disks", 00:05:33.934 "nbd_stop_disk", 00:05:33.934 "nbd_start_disk", 00:05:33.934 "ublk_recover_disk", 00:05:33.934 "ublk_get_disks", 00:05:33.934 "ublk_stop_disk", 00:05:33.934 "ublk_start_disk", 00:05:33.934 "ublk_destroy_target", 00:05:33.934 "ublk_create_target", 00:05:33.934 "virtio_blk_create_transport", 00:05:33.934 "virtio_blk_get_transports", 00:05:33.934 "vhost_controller_set_coalescing", 00:05:33.934 "vhost_get_controllers", 00:05:33.934 "vhost_delete_controller", 00:05:33.934 "vhost_create_blk_controller", 00:05:33.934 "vhost_scsi_controller_remove_target", 00:05:33.935 "vhost_scsi_controller_add_target", 00:05:33.935 "vhost_start_scsi_controller", 00:05:33.935 "vhost_create_scsi_controller", 00:05:33.935 "thread_set_cpumask", 00:05:33.935 "scheduler_set_options", 00:05:33.935 "framework_get_governor", 00:05:33.935 "framework_get_scheduler", 00:05:33.935 "framework_set_scheduler", 00:05:33.935 "framework_get_reactors", 00:05:33.935 "thread_get_io_channels", 00:05:33.935 "thread_get_pollers", 00:05:33.935 "thread_get_stats", 00:05:33.935 "framework_monitor_context_switch", 00:05:33.935 "spdk_kill_instance", 00:05:33.935 "log_enable_timestamps", 00:05:33.935 "log_get_flags", 00:05:33.935 "log_clear_flag", 00:05:33.935 "log_set_flag", 00:05:33.935 "log_get_level", 00:05:33.935 "log_set_level", 00:05:33.935 "log_get_print_level", 00:05:33.935 "log_set_print_level", 00:05:33.935 "framework_enable_cpumask_locks", 00:05:33.935 "framework_disable_cpumask_locks", 00:05:33.935 "framework_wait_init", 00:05:33.935 "framework_start_init", 00:05:33.935 "scsi_get_devices", 00:05:33.935 "bdev_get_histogram", 00:05:33.935 "bdev_enable_histogram", 00:05:33.935 "bdev_set_qos_limit", 00:05:33.935 "bdev_set_qd_sampling_period", 00:05:33.935 "bdev_get_bdevs", 00:05:33.935 "bdev_reset_iostat", 00:05:33.935 "bdev_get_iostat", 00:05:33.935 "bdev_examine", 00:05:33.935 "bdev_wait_for_examine", 00:05:33.935 "bdev_set_options", 00:05:33.935 "accel_get_stats", 00:05:33.935 "accel_set_options", 00:05:33.935 "accel_set_driver", 00:05:33.935 "accel_crypto_key_destroy", 00:05:33.935 "accel_crypto_keys_get", 00:05:33.935 "accel_crypto_key_create", 00:05:33.935 "accel_assign_opc", 00:05:33.935 "accel_get_module_info", 00:05:33.935 "accel_get_opc_assignments", 00:05:33.935 "vmd_rescan", 00:05:33.935 "vmd_remove_device", 00:05:33.935 "vmd_enable", 00:05:33.935 "sock_get_default_impl", 00:05:33.935 "sock_set_default_impl", 
00:05:33.935 "sock_impl_set_options", 00:05:33.935 "sock_impl_get_options", 00:05:33.935 "iobuf_get_stats", 00:05:33.935 "iobuf_set_options", 00:05:33.935 "keyring_get_keys", 00:05:33.935 "vfu_tgt_set_base_path", 00:05:33.935 "framework_get_pci_devices", 00:05:33.935 "framework_get_config", 00:05:33.935 "framework_get_subsystems", 00:05:33.935 "fsdev_set_opts", 00:05:33.935 "fsdev_get_opts", 00:05:33.935 "trace_get_info", 00:05:33.935 "trace_get_tpoint_group_mask", 00:05:33.935 "trace_disable_tpoint_group", 00:05:33.935 "trace_enable_tpoint_group", 00:05:33.935 "trace_clear_tpoint_mask", 00:05:33.935 "trace_set_tpoint_mask", 00:05:33.935 "notify_get_notifications", 00:05:33.935 "notify_get_types", 00:05:33.935 "spdk_get_version", 00:05:33.935 "rpc_get_methods" 00:05:33.935 ] 00:05:33.935 05:34:07 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:33.935 05:34:07 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:33.935 05:34:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.935 05:34:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:33.935 05:34:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3154870 00:05:33.935 05:34:07 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 3154870 ']' 00:05:33.935 05:34:07 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 3154870 00:05:33.935 05:34:07 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:33.935 05:34:07 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.935 05:34:07 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3154870 00:05:33.935 05:34:07 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.935 05:34:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.935 05:34:07 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3154870' 00:05:33.935 killing process with pid 3154870 00:05:33.935 05:34:07 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 3154870 00:05:33.935 05:34:07 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 3154870 00:05:34.503 00:05:34.503 real 0m1.104s 00:05:34.503 user 0m1.830s 00:05:34.503 sys 0m0.443s 00:05:34.503 05:34:08 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.503 05:34:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.503 ************************************ 00:05:34.503 END TEST spdkcli_tcp 00:05:34.503 ************************************ 00:05:34.503 05:34:08 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:34.503 05:34:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.503 05:34:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.503 05:34:08 -- common/autotest_common.sh@10 -- # set +x 00:05:34.503 ************************************ 00:05:34.503 START TEST dpdk_mem_utility 00:05:34.503 ************************************ 00:05:34.503 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:34.503 * Looking for test storage... 
00:05:34.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:34.503 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:34.503 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:34.503 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:34.503 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:34.503 05:34:08 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.503 05:34:08 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.503 05:34:08 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.504 05:34:08 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:34.504 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.504 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:34.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.504 --rc genhtml_branch_coverage=1 00:05:34.504 --rc genhtml_function_coverage=1 00:05:34.504 --rc genhtml_legend=1 00:05:34.504 --rc geninfo_all_blocks=1 00:05:34.504 --rc geninfo_unexecuted_blocks=1 00:05:34.504 00:05:34.504 ' 00:05:34.504 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:34.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.504 --rc 
genhtml_branch_coverage=1 00:05:34.504 --rc genhtml_function_coverage=1 00:05:34.504 --rc genhtml_legend=1 00:05:34.504 --rc geninfo_all_blocks=1 00:05:34.504 --rc geninfo_unexecuted_blocks=1 00:05:34.504 00:05:34.504 ' 00:05:34.504 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:34.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.504 --rc genhtml_branch_coverage=1 00:05:34.504 --rc genhtml_function_coverage=1 00:05:34.504 --rc genhtml_legend=1 00:05:34.504 --rc geninfo_all_blocks=1 00:05:34.504 --rc geninfo_unexecuted_blocks=1 00:05:34.504 00:05:34.504 ' 00:05:34.504 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:34.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.504 --rc genhtml_branch_coverage=1 00:05:34.504 --rc genhtml_function_coverage=1 00:05:34.504 --rc genhtml_legend=1 00:05:34.504 --rc geninfo_all_blocks=1 00:05:34.504 --rc geninfo_unexecuted_blocks=1 00:05:34.504 00:05:34.504 ' 00:05:34.504 05:34:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:34.504 05:34:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3155156 00:05:34.504 05:34:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3155156 00:05:34.504 05:34:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.504 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 3155156 ']' 00:05:34.504 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.504 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:34.504 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.504 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:34.504 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:34.762 [2024-12-16 05:34:08.382980] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
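The utility under test here is scripts/dpdk_mem_info.py: once the target above is up, the env_dpdk_get_mem_stats RPC asks it to write its DPDK memory statistics to /tmp/spdk_mem_dump.txt, and the script then summarizes that dump, first as heap/mempool/memzone totals and then, with -m 0, as the per-element detail of heap 0 that makes up the long listing below. A condensed sketch of the same flow against a running target (assuming the script reads the default dump location):

  # Sketch: dump and summarize the target's DPDK memory usage.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # ask the running target to write its memory stats (reported path: /tmp/spdk_mem_dump.txt)
  $SPDK/scripts/rpc.py env_dpdk_get_mem_stats
  # overall heaps / mempools / memzones summary
  $SPDK/scripts/dpdk_mem_info.py
  # per-malloc-element breakdown of heap 0
  $SPDK/scripts/dpdk_mem_info.py -m 0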
00:05:34.762 [2024-12-16 05:34:08.383026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3155156 ] 00:05:34.762 [2024-12-16 05:34:08.438694] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.762 [2024-12-16 05:34:08.476989] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.022 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.022 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:35.022 05:34:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:35.022 05:34:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:35.022 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.022 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:35.022 { 00:05:35.022 "filename": "/tmp/spdk_mem_dump.txt" 00:05:35.022 } 00:05:35.022 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.022 05:34:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:35.022 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:35.022 1 heaps totaling size 860.000000 MiB 00:05:35.022 size: 860.000000 MiB heap id: 0 00:05:35.022 end heaps---------- 00:05:35.022 9 mempools totaling size 642.649841 MiB 00:05:35.022 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:35.022 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:35.022 size: 92.545471 MiB name: bdev_io_3155156 00:05:35.022 size: 51.011292 MiB name: evtpool_3155156 00:05:35.022 size: 50.003479 MiB name: msgpool_3155156 00:05:35.022 size: 36.509338 MiB name: fsdev_io_3155156 00:05:35.022 size: 21.763794 MiB name: PDU_Pool 00:05:35.022 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:35.022 size: 0.026123 MiB name: Session_Pool 00:05:35.022 end mempools------- 00:05:35.022 6 memzones totaling size 4.142822 MiB 00:05:35.022 size: 1.000366 MiB name: RG_ring_0_3155156 00:05:35.022 size: 1.000366 MiB name: RG_ring_1_3155156 00:05:35.022 size: 1.000366 MiB name: RG_ring_4_3155156 00:05:35.022 size: 1.000366 MiB name: RG_ring_5_3155156 00:05:35.022 size: 0.125366 MiB name: RG_ring_2_3155156 00:05:35.022 size: 0.015991 MiB name: RG_ring_3_3155156 00:05:35.022 end memzones------- 00:05:35.022 05:34:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:35.022 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:05:35.022 list of free elements. 
size: 13.984680 MiB 00:05:35.022 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:35.022 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:35.022 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:35.022 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:35.022 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:35.022 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:35.022 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:35.022 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:35.022 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:35.022 element at address: 0x20001d800000 with size: 0.582886 MiB 00:05:35.022 element at address: 0x200003e00000 with size: 0.495605 MiB 00:05:35.022 element at address: 0x20000d800000 with size: 0.490723 MiB 00:05:35.022 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:35.022 element at address: 0x200007000000 with size: 0.481934 MiB 00:05:35.022 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:05:35.022 element at address: 0x200003a00000 with size: 0.354858 MiB 00:05:35.022 list of standard malloc elements. size: 199.218628 MiB 00:05:35.022 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:35.022 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:35.022 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:35.022 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:35.022 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:35.022 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:35.022 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:35.022 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:35.022 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:35.022 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:35.022 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:35.022 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:35.022 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:35.022 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:35.022 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:35.022 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:35.022 element at address: 0x200003a5ad80 with size: 0.000183 MiB 00:05:35.022 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:35.022 element at address: 0x200003a5f240 with size: 0.000183 MiB 00:05:35.022 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:35.022 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:35.022 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:35.022 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:35.022 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:35.022 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:35.022 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:35.022 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:35.022 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:35.022 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:35.022 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:35.022 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:35.022 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:05:35.022 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:35.022 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:35.022 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:35.022 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:35.022 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:35.022 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:35.022 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:35.022 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:05:35.022 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:05:35.022 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:05:35.022 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:35.022 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:35.022 list of memzone associated elements. size: 646.796692 MiB 00:05:35.022 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:35.022 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:35.022 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:35.022 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:35.022 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:35.022 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3155156_0 00:05:35.022 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:35.022 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3155156_0 00:05:35.022 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:35.022 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3155156_0 00:05:35.022 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:35.022 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3155156_0 00:05:35.022 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:35.022 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:35.022 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:35.022 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:35.022 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:35.022 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3155156 00:05:35.022 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:35.022 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3155156 00:05:35.022 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:35.022 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3155156 00:05:35.022 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:35.022 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:35.022 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:35.022 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:35.022 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:35.023 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:35.023 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:35.023 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:35.023 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:35.023 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3155156 00:05:35.023 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:35.023 associated memzone info: 
size: 1.000366 MiB name: RG_ring_1_3155156 00:05:35.023 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:35.023 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3155156 00:05:35.023 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:35.023 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3155156 00:05:35.023 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:35.023 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3155156 00:05:35.023 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:35.023 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3155156 00:05:35.023 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:35.023 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:35.023 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:35.023 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:35.023 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:35.023 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:35.023 element at address: 0x200003a5f300 with size: 0.125488 MiB 00:05:35.023 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3155156 00:05:35.023 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:35.023 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:35.023 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:05:35.023 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:35.023 element at address: 0x200003a5b040 with size: 0.016113 MiB 00:05:35.023 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3155156 00:05:35.023 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:05:35.023 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:35.023 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:35.023 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3155156 00:05:35.023 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:35.023 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3155156 00:05:35.023 element at address: 0x200003a5ae40 with size: 0.000305 MiB 00:05:35.023 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3155156 00:05:35.023 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:05:35.023 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:35.023 05:34:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:35.023 05:34:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3155156 00:05:35.023 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 3155156 ']' 00:05:35.023 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 3155156 00:05:35.023 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:35.023 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:35.023 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3155156 00:05:35.023 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:35.023 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:35.023 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3155156' 
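For quick reference, the dpdk_mem_utility run traced above comes down to a short command sequence: start the target, ask it to dump its DPDK memory stats over RPC, then summarize the dump with the helper script. A minimal sketch using the same paths as this workspace (it assumes the target has finished bringing up /var/tmp/spdk.sock before the RPC is issued, which the test handles via waitforlisten):

```bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

$SPDK/build/bin/spdk_tgt &                    # target under test
$SPDK/scripts/rpc.py env_dpdk_get_mem_stats   # dumps stats to /tmp/spdk_mem_dump.txt
$SPDK/scripts/dpdk_mem_info.py                # heap / mempool / memzone summary
$SPDK/scripts/dpdk_mem_info.py -m 0           # element-level detail (heap 0 output above)
```

Note that the memzone names in the dump (evtpool_3155156, msgpool_3155156, RG_ring_*_3155156, and so on) are keyed to the spdk_tgt pid, which is why they differ between runs.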
00:05:35.023 killing process with pid 3155156 00:05:35.023 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 3155156 00:05:35.023 05:34:08 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 3155156 00:05:35.591 00:05:35.591 real 0m0.988s 00:05:35.591 user 0m0.929s 00:05:35.591 sys 0m0.380s 00:05:35.591 05:34:09 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.591 05:34:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:35.591 ************************************ 00:05:35.591 END TEST dpdk_mem_utility 00:05:35.591 ************************************ 00:05:35.591 05:34:09 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:35.591 05:34:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.591 05:34:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.591 05:34:09 -- common/autotest_common.sh@10 -- # set +x 00:05:35.591 ************************************ 00:05:35.591 START TEST event 00:05:35.591 ************************************ 00:05:35.591 05:34:09 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:35.591 * Looking for test storage... 00:05:35.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:35.591 05:34:09 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:35.591 05:34:09 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:35.591 05:34:09 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:35.591 05:34:09 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:35.591 05:34:09 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.591 05:34:09 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.592 05:34:09 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.592 05:34:09 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.592 05:34:09 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.592 05:34:09 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.592 05:34:09 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.592 05:34:09 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.592 05:34:09 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.592 05:34:09 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.592 05:34:09 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.592 05:34:09 event -- scripts/common.sh@344 -- # case "$op" in 00:05:35.592 05:34:09 event -- scripts/common.sh@345 -- # : 1 00:05:35.592 05:34:09 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.592 05:34:09 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.592 05:34:09 event -- scripts/common.sh@365 -- # decimal 1 00:05:35.592 05:34:09 event -- scripts/common.sh@353 -- # local d=1 00:05:35.592 05:34:09 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.592 05:34:09 event -- scripts/common.sh@355 -- # echo 1 00:05:35.592 05:34:09 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.592 05:34:09 event -- scripts/common.sh@366 -- # decimal 2 00:05:35.592 05:34:09 event -- scripts/common.sh@353 -- # local d=2 00:05:35.592 05:34:09 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.592 05:34:09 event -- scripts/common.sh@355 -- # echo 2 00:05:35.592 05:34:09 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.592 05:34:09 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.592 05:34:09 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.592 05:34:09 event -- scripts/common.sh@368 -- # return 0 00:05:35.592 05:34:09 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.592 05:34:09 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:35.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.592 --rc genhtml_branch_coverage=1 00:05:35.592 --rc genhtml_function_coverage=1 00:05:35.592 --rc genhtml_legend=1 00:05:35.592 --rc geninfo_all_blocks=1 00:05:35.592 --rc geninfo_unexecuted_blocks=1 00:05:35.592 00:05:35.592 ' 00:05:35.592 05:34:09 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:35.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.592 --rc genhtml_branch_coverage=1 00:05:35.592 --rc genhtml_function_coverage=1 00:05:35.592 --rc genhtml_legend=1 00:05:35.592 --rc geninfo_all_blocks=1 00:05:35.592 --rc geninfo_unexecuted_blocks=1 00:05:35.592 00:05:35.592 ' 00:05:35.592 05:34:09 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:35.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.592 --rc genhtml_branch_coverage=1 00:05:35.592 --rc genhtml_function_coverage=1 00:05:35.592 --rc genhtml_legend=1 00:05:35.592 --rc geninfo_all_blocks=1 00:05:35.592 --rc geninfo_unexecuted_blocks=1 00:05:35.592 00:05:35.592 ' 00:05:35.592 05:34:09 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:35.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.592 --rc genhtml_branch_coverage=1 00:05:35.592 --rc genhtml_function_coverage=1 00:05:35.592 --rc genhtml_legend=1 00:05:35.592 --rc geninfo_all_blocks=1 00:05:35.592 --rc geninfo_unexecuted_blocks=1 00:05:35.592 00:05:35.592 ' 00:05:35.592 05:34:09 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:35.592 05:34:09 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:35.592 05:34:09 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:35.592 05:34:09 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:35.592 05:34:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.592 05:34:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.592 ************************************ 00:05:35.592 START TEST event_perf 00:05:35.592 ************************************ 00:05:35.592 05:34:09 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:35.592 Running I/O for 1 seconds...[2024-12-16 05:34:09.416185] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:35.592 [2024-12-16 05:34:09.416255] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3155441 ] 00:05:35.851 [2024-12-16 05:34:09.476929] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:35.851 [2024-12-16 05:34:09.518446] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.851 [2024-12-16 05:34:09.518543] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.851 [2024-12-16 05:34:09.518631] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.851 [2024-12-16 05:34:09.518632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.784 Running I/O for 1 seconds... 00:05:36.784 lcore 0: 207366 00:05:36.784 lcore 1: 207363 00:05:36.784 lcore 2: 207364 00:05:36.784 lcore 3: 207365 00:05:36.784 done. 00:05:36.784 00:05:36.784 real 0m1.187s 00:05:36.784 user 0m4.097s 00:05:36.784 sys 0m0.087s 00:05:36.784 05:34:10 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.784 05:34:10 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.784 ************************************ 00:05:36.784 END TEST event_perf 00:05:36.784 ************************************ 00:05:36.784 05:34:10 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:36.784 05:34:10 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:36.784 05:34:10 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.784 05:34:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.043 ************************************ 00:05:37.043 START TEST event_reactor 00:05:37.043 ************************************ 00:05:37.043 05:34:10 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:37.043 [2024-12-16 05:34:10.668543] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:37.043 [2024-12-16 05:34:10.668607] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3155686 ] 00:05:37.043 [2024-12-16 05:34:10.728477] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.043 [2024-12-16 05:34:10.766362] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.046 test_start 00:05:38.046 oneshot 00:05:38.046 tick 100 00:05:38.046 tick 100 00:05:38.046 tick 250 00:05:38.046 tick 100 00:05:38.046 tick 100 00:05:38.046 tick 100 00:05:38.046 tick 250 00:05:38.046 tick 500 00:05:38.046 tick 100 00:05:38.046 tick 100 00:05:38.046 tick 250 00:05:38.046 tick 100 00:05:38.046 tick 100 00:05:38.046 test_end 00:05:38.046 00:05:38.046 real 0m1.177s 00:05:38.046 user 0m1.096s 00:05:38.046 sys 0m0.078s 00:05:38.046 05:34:11 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.046 05:34:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:38.046 ************************************ 00:05:38.046 END TEST event_reactor 00:05:38.046 ************************************ 00:05:38.046 05:34:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:38.046 05:34:11 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:38.046 05:34:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.046 05:34:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.383 ************************************ 00:05:38.383 START TEST event_reactor_perf 00:05:38.383 ************************************ 00:05:38.383 05:34:11 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:38.383 [2024-12-16 05:34:11.910473] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:38.383 [2024-12-16 05:34:11.910535] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3155930 ] 00:05:38.383 [2024-12-16 05:34:11.968856] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.383 [2024-12-16 05:34:12.007046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.320 test_start 00:05:39.320 test_end 00:05:39.320 Performance: 517573 events per second 00:05:39.320 00:05:39.320 real 0m1.175s 00:05:39.320 user 0m1.094s 00:05:39.320 sys 0m0.076s 00:05:39.320 05:34:13 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.320 05:34:13 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:39.320 ************************************ 00:05:39.320 END TEST event_reactor_perf 00:05:39.320 ************************************ 00:05:39.320 05:34:13 event -- event/event.sh@49 -- # uname -s 00:05:39.320 05:34:13 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:39.320 05:34:13 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:39.320 05:34:13 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.320 05:34:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.320 05:34:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.320 ************************************ 00:05:39.320 START TEST event_scheduler 00:05:39.320 ************************************ 00:05:39.320 05:34:13 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:39.579 * Looking for test storage... 
00:05:39.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:39.579 05:34:13 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:39.579 05:34:13 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:39.579 05:34:13 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:39.579 05:34:13 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.579 05:34:13 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:39.579 05:34:13 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.579 05:34:13 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:39.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.579 --rc genhtml_branch_coverage=1 00:05:39.579 --rc genhtml_function_coverage=1 00:05:39.579 --rc genhtml_legend=1 00:05:39.579 --rc geninfo_all_blocks=1 00:05:39.579 --rc geninfo_unexecuted_blocks=1 00:05:39.579 00:05:39.579 ' 00:05:39.579 05:34:13 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:39.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.579 --rc genhtml_branch_coverage=1 00:05:39.579 --rc genhtml_function_coverage=1 00:05:39.579 --rc genhtml_legend=1 00:05:39.579 --rc geninfo_all_blocks=1 00:05:39.579 --rc geninfo_unexecuted_blocks=1 00:05:39.579 00:05:39.579 ' 00:05:39.579 05:34:13 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:39.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.579 --rc genhtml_branch_coverage=1 00:05:39.579 --rc genhtml_function_coverage=1 00:05:39.579 --rc genhtml_legend=1 00:05:39.579 --rc geninfo_all_blocks=1 00:05:39.579 --rc geninfo_unexecuted_blocks=1 00:05:39.579 00:05:39.579 ' 00:05:39.579 05:34:13 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:39.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.579 --rc genhtml_branch_coverage=1 00:05:39.579 --rc genhtml_function_coverage=1 00:05:39.579 --rc genhtml_legend=1 00:05:39.579 --rc geninfo_all_blocks=1 00:05:39.579 --rc geninfo_unexecuted_blocks=1 00:05:39.579 00:05:39.579 ' 00:05:39.579 05:34:13 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:39.580 05:34:13 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3156216 00:05:39.580 05:34:13 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.580 05:34:13 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:39.580 05:34:13 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
3156216 00:05:39.580 05:34:13 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 3156216 ']' 00:05:39.580 05:34:13 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.580 05:34:13 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.580 05:34:13 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.580 05:34:13 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.580 05:34:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.580 [2024-12-16 05:34:13.345692] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:39.580 [2024-12-16 05:34:13.345740] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3156216 ] 00:05:39.580 [2024-12-16 05:34:13.396766] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:39.839 [2024-12-16 05:34:13.438650] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.839 [2024-12-16 05:34:13.438748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.839 [2024-12-16 05:34:13.438857] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.839 [2024-12-16 05:34:13.438858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.839 05:34:13 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.839 05:34:13 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:39.839 05:34:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:39.839 05:34:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.839 05:34:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.839 [2024-12-16 05:34:13.515447] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:39.839 [2024-12-16 05:34:13.515465] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:39.839 [2024-12-16 05:34:13.515474] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:39.839 [2024-12-16 05:34:13.515480] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:39.839 [2024-12-16 05:34:13.515485] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:39.839 05:34:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.839 05:34:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:39.839 05:34:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.839 05:34:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.839 [2024-12-16 05:34:13.588227] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
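Stripped of the xtrace noise, the scheduler setup above is just two RPCs against the test app started with --wait-for-rpc; the dpdk_governor error is non-fatal here and the dynamic scheduler continues without it. A minimal sketch with the same flags and paths as this run:

```bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# 4-core mask, main lcore 2, held at --wait-for-rpc until init is triggered.
$SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &

$SPDK/scripts/rpc.py framework_set_scheduler dynamic   # select the dynamic scheduler
$SPDK/scripts/rpc.py framework_start_init              # finish subsystem initialization
```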
00:05:39.839 05:34:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.839 05:34:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:39.839 05:34:13 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.839 05:34:13 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.839 05:34:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.839 ************************************ 00:05:39.839 START TEST scheduler_create_thread 00:05:39.839 ************************************ 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.839 2 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.839 3 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.839 4 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.839 5 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.839 6 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.839 7 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.839 8 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.839 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.098 9 00:05:40.098 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.098 05:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:40.098 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.098 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.098 10 00:05:40.098 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.098 05:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:40.098 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.098 05:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.358 05:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.358 05:34:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:40.358 05:34:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:40.358 05:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.358 05:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.294 05:34:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.294 05:34:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:41.294 05:34:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.294 05:34:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.228 05:34:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.228 05:34:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:42.228 05:34:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:42.228 05:34:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.228 05:34:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.163 05:34:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.163 00:05:43.163 real 0m3.231s 00:05:43.163 user 0m0.027s 00:05:43.163 sys 0m0.003s 00:05:43.163 05:34:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.163 05:34:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.163 ************************************ 00:05:43.163 END TEST scheduler_create_thread 00:05:43.163 ************************************ 00:05:43.163 05:34:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:43.163 05:34:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3156216 00:05:43.163 05:34:16 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 3156216 ']' 00:05:43.163 05:34:16 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 3156216 00:05:43.163 05:34:16 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:43.163 05:34:16 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.163 05:34:16 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3156216 00:05:43.163 05:34:16 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:43.163 05:34:16 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:43.163 05:34:16 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3156216' 00:05:43.163 killing process with pid 3156216 00:05:43.163 05:34:16 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 3156216 00:05:43.163 05:34:16 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 3156216 00:05:43.421 [2024-12-16 05:34:17.233248] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
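The scheduler_create_thread subtest above drives everything through the scheduler_plugin RPC extensions: create pinned busy/idle threads per core, create unpinned ones, change one thread's active level, and delete another. A condensed sketch of that sequence (method names and arguments are copied from the trace; the PYTHONPATH export is an assumption about where scheduler_plugin.py lives):

```bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
export PYTHONPATH=$SPDK/test/event/scheduler           # assumed plugin location
RPC="$SPDK/scripts/rpc.py --plugin scheduler_plugin"

# One busy (-a 100) and one idle (-a 0) thread pinned to each core in 0xF.
$RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100
$RPC scheduler_thread_create -n idle_pinned   -m 0x1 -a 0
# ...repeated for -m 0x2, 0x4 and 0x8 in the trace above.

# Unpinned threads: raise one to 50% active, create and delete another.
tid=$($RPC scheduler_thread_create -n half_active -a 0)
$RPC scheduler_thread_set_active "$tid" 50
tid=$($RPC scheduler_thread_create -n deleted -a 100)
$RPC scheduler_thread_delete "$tid"
```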
00:05:43.680 00:05:43.680 real 0m4.344s 00:05:43.680 user 0m7.566s 00:05:43.680 sys 0m0.348s 00:05:43.680 05:34:17 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.680 05:34:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.680 ************************************ 00:05:43.680 END TEST event_scheduler 00:05:43.680 ************************************ 00:05:43.680 05:34:17 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:43.680 05:34:17 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:43.680 05:34:17 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.680 05:34:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.680 05:34:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.940 ************************************ 00:05:43.940 START TEST app_repeat 00:05:43.940 ************************************ 00:05:43.940 05:34:17 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:43.940 05:34:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.940 05:34:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.940 05:34:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:43.940 05:34:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.940 05:34:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:43.940 05:34:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:43.940 05:34:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:43.940 05:34:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3156939 00:05:43.940 05:34:17 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.940 05:34:17 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:43.940 05:34:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3156939' 00:05:43.940 Process app_repeat pid: 3156939 00:05:43.940 05:34:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:43.940 05:34:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:43.940 spdk_app_start Round 0 00:05:43.940 05:34:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3156939 /var/tmp/spdk-nbd.sock 00:05:43.940 05:34:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3156939 ']' 00:05:43.940 05:34:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.940 05:34:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.940 05:34:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.940 05:34:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.940 05:34:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.940 [2024-12-16 05:34:17.587141] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:43.940 [2024-12-16 05:34:17.587198] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3156939 ] 00:05:43.940 [2024-12-16 05:34:17.646077] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.940 [2024-12-16 05:34:17.686113] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.940 [2024-12-16 05:34:17.686116] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.940 05:34:17 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.940 05:34:17 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:43.940 05:34:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.199 Malloc0 00:05:44.199 05:34:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.457 Malloc1 00:05:44.457 05:34:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.457 05:34:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.457 05:34:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.457 05:34:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:44.457 05:34:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.457 05:34:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:44.457 05:34:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.457 05:34:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.457 05:34:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.457 05:34:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:44.458 05:34:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.458 05:34:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:44.458 05:34:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:44.458 05:34:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:44.458 05:34:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.458 05:34:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.716 /dev/nbd0 00:05:44.716 05:34:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.716 05:34:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.716 05:34:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:44.716 05:34:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:44.716 05:34:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:44.716 05:34:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:44.716 05:34:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:05:44.716 05:34:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:44.716 05:34:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:44.716 05:34:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:44.716 05:34:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.716 1+0 records in 00:05:44.716 1+0 records out 00:05:44.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188056 s, 21.8 MB/s 00:05:44.716 05:34:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.716 05:34:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:44.716 05:34:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.716 05:34:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:44.716 05:34:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:44.716 05:34:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.716 05:34:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.716 05:34:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:44.975 /dev/nbd1 00:05:44.975 05:34:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:44.975 05:34:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:44.975 05:34:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:44.975 05:34:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:44.975 05:34:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:44.975 05:34:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:44.975 05:34:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:44.975 05:34:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:44.975 05:34:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:44.975 05:34:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:44.975 05:34:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.975 1+0 records in 00:05:44.975 1+0 records out 00:05:44.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191991 s, 21.3 MB/s 00:05:44.975 05:34:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.975 05:34:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:44.975 05:34:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.975 05:34:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:44.975 05:34:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:44.975 05:34:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.975 05:34:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.975 
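On the app_repeat side, the nbd plumbing above follows the usual nbd_common.sh pattern: create two malloc bdevs over the app's /var/tmp/spdk-nbd.sock RPC socket, export them as /dev/nbd0 and /dev/nbd1, and treat a device as ready once it appears in /proc/partitions and a direct 4 KiB read succeeds. A rough sketch with the values used in this run:

```bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$RPC bdev_malloc_create 64 4096        # -> Malloc0 (64 MB, 4 KiB block size)
$RPC bdev_malloc_create 64 4096        # -> Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1

# Readiness check, same as waitfornbd in the trace above.
grep -q -w nbd0 /proc/partitions
dd if=/dev/nbd0 of=$SPDK/test/event/nbdtest bs=4096 count=1 iflag=direct
```

The actual data check then dd's 1 MiB of random data onto each device with oflag=direct and cmp's it back against the source file, as the following trace entries show.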
05:34:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.975 05:34:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.975 05:34:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.975 05:34:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:44.975 { 00:05:44.975 "nbd_device": "/dev/nbd0", 00:05:44.975 "bdev_name": "Malloc0" 00:05:44.975 }, 00:05:44.975 { 00:05:44.975 "nbd_device": "/dev/nbd1", 00:05:44.975 "bdev_name": "Malloc1" 00:05:44.975 } 00:05:44.975 ]' 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.235 { 00:05:45.235 "nbd_device": "/dev/nbd0", 00:05:45.235 "bdev_name": "Malloc0" 00:05:45.235 }, 00:05:45.235 { 00:05:45.235 "nbd_device": "/dev/nbd1", 00:05:45.235 "bdev_name": "Malloc1" 00:05:45.235 } 00:05:45.235 ]' 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.235 /dev/nbd1' 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.235 /dev/nbd1' 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.235 256+0 records in 00:05:45.235 256+0 records out 00:05:45.235 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00995975 s, 105 MB/s 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.235 256+0 records in 00:05:45.235 256+0 records out 00:05:45.235 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136562 s, 76.8 MB/s 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.235 256+0 records in 00:05:45.235 256+0 records out 00:05:45.235 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143083 s, 73.3 MB/s 00:05:45.235 05:34:18 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.235 05:34:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.494 05:34:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.494 05:34:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.494 05:34:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.494 05:34:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.494 05:34:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.494 05:34:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.494 05:34:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.494 05:34:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.494 05:34:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.494 05:34:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.494 05:34:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.494 05:34:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.494 05:34:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.494 05:34:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.494 05:34:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:45.494 05:34:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.494 05:34:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.494 05:34:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.494 05:34:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.494 05:34:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.494 05:34:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.753 05:34:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.753 05:34:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:45.753 05:34:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.753 05:34:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:45.753 05:34:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:45.753 05:34:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.753 05:34:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:45.753 05:34:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:45.753 05:34:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:45.753 05:34:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:45.753 05:34:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:45.753 05:34:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:45.753 05:34:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.012 05:34:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:46.270 [2024-12-16 05:34:19.957818] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.270 [2024-12-16 05:34:19.993199] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.270 [2024-12-16 05:34:19.993200] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.270 [2024-12-16 05:34:20.035015] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.270 [2024-12-16 05:34:20.035053] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:49.557 05:34:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:49.557 05:34:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:49.557 spdk_app_start Round 1 00:05:49.557 05:34:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3156939 /var/tmp/spdk-nbd.sock 00:05:49.557 05:34:22 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3156939 ']' 00:05:49.557 05:34:22 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.557 05:34:22 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.557 05:34:22 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:49.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
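The nbd_dd_data_verify calls traced above follow a simple write-then-verify pattern: fill a temp file with random data, dd it onto each NBD device with O_DIRECT, then cmp each device against the file. A condensed sketch of that cycle (the file path and device names are placeholders; the real helper loops over the nbd_list array passed to it):

    TMP=/tmp/nbdrandtest                                        # placeholder path
    dd if=/dev/urandom of="$TMP" bs=4096 count=256              # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$TMP" of="$dev" bs=4096 count=256 oflag=direct   # write it to each NBD device
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$TMP" "$dev"                              # verify the first 1 MiB matches
    done
    rm "$TMP"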
00:05:49.557 05:34:22 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.557 05:34:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.557 05:34:22 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.557 05:34:22 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:49.557 05:34:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.557 Malloc0 00:05:49.557 05:34:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.557 Malloc1 00:05:49.557 05:34:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.557 05:34:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.557 05:34:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.557 05:34:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.557 05:34:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.557 05:34:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.557 05:34:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.557 05:34:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.557 05:34:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.557 05:34:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.557 05:34:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.557 05:34:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.557 05:34:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:49.557 05:34:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.557 05:34:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.557 05:34:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.816 /dev/nbd0 00:05:49.816 05:34:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.816 05:34:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:49.816 05:34:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:49.816 05:34:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:49.816 05:34:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:49.816 05:34:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:49.816 05:34:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:49.816 05:34:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:49.816 05:34:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:49.816 05:34:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:49.816 05:34:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:49.816 1+0 records in 00:05:49.816 1+0 records out 00:05:49.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000118789 s, 34.5 MB/s 00:05:49.816 05:34:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.816 05:34:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:49.816 05:34:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.816 05:34:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:49.816 05:34:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:49.816 05:34:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.816 05:34:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.816 05:34:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:50.075 /dev/nbd1 00:05:50.075 05:34:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.075 05:34:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.075 05:34:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:50.075 05:34:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:50.075 05:34:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:50.075 05:34:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:50.075 05:34:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:50.075 05:34:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:50.075 05:34:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:50.075 05:34:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:50.075 05:34:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.075 1+0 records in 00:05:50.075 1+0 records out 00:05:50.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000142101 s, 28.8 MB/s 00:05:50.075 05:34:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.075 05:34:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:50.075 05:34:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.075 05:34:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:50.075 05:34:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:50.075 05:34:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.075 05:34:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.075 05:34:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.075 05:34:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.075 05:34:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:50.334 { 00:05:50.334 "nbd_device": "/dev/nbd0", 00:05:50.334 "bdev_name": "Malloc0" 00:05:50.334 }, 00:05:50.334 { 00:05:50.334 "nbd_device": "/dev/nbd1", 00:05:50.334 "bdev_name": "Malloc1" 00:05:50.334 } 00:05:50.334 ]' 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.334 { 00:05:50.334 "nbd_device": "/dev/nbd0", 00:05:50.334 "bdev_name": "Malloc0" 00:05:50.334 }, 00:05:50.334 { 00:05:50.334 "nbd_device": "/dev/nbd1", 00:05:50.334 "bdev_name": "Malloc1" 00:05:50.334 } 00:05:50.334 ]' 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.334 /dev/nbd1' 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.334 /dev/nbd1' 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.334 256+0 records in 00:05:50.334 256+0 records out 00:05:50.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010113 s, 104 MB/s 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.334 256+0 records in 00:05:50.334 256+0 records out 00:05:50.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137605 s, 76.2 MB/s 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.334 256+0 records in 00:05:50.334 256+0 records out 00:05:50.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149263 s, 70.3 MB/s 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.334 05:34:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.593 05:34:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.593 05:34:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.593 05:34:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.593 05:34:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.593 05:34:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.593 05:34:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.593 05:34:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.593 05:34:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.593 05:34:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.593 05:34:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.851 05:34:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.851 05:34:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.851 05:34:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.851 05:34:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.851 05:34:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.851 05:34:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.851 05:34:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.851 05:34:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.851 05:34:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.851 05:34:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.851 05:34:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.110 05:34:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.110 05:34:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.110 05:34:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.110 05:34:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.110 05:34:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.110 05:34:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.110 05:34:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:51.110 05:34:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.110 05:34:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.110 05:34:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.110 05:34:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.110 05:34:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.110 05:34:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.368 05:34:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:51.368 [2024-12-16 05:34:25.186546] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.368 [2024-12-16 05:34:25.222240] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.368 [2024-12-16 05:34:25.222241] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.626 [2024-12-16 05:34:25.263244] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.626 [2024-12-16 05:34:25.263285] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:54.911 05:34:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.911 05:34:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:54.911 spdk_app_start Round 2 00:05:54.911 05:34:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3156939 /var/tmp/spdk-nbd.sock 00:05:54.911 05:34:28 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3156939 ']' 00:05:54.911 05:34:28 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.911 05:34:28 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.911 05:34:28 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
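Each app_repeat round above is driven entirely over the /var/tmp/spdk-nbd.sock RPC socket. Roughly, and assuming an SPDK target is already listening on that socket, the sequence the test issues looks like:

    RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"   # path relative to an SPDK checkout (assumed)
    $RPC bdev_malloc_create 64 4096                  # 64 MiB malloc bdev, 4 KiB blocks -> Malloc0
    $RPC bdev_malloc_create 64 4096                  # -> Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0            # expose each bdev as an NBD block device
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    $RPC nbd_get_disks | jq -r '.[] | .nbd_device'   # list attached devices, as nbd_get_count does
    $RPC nbd_stop_disk /dev/nbd0                     # detach again after the data verify
    $RPC nbd_stop_disk /dev/nbd1
    $RPC spdk_kill_instance SIGTERM                  # ask the app to shut down for the next round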
00:05:54.911 05:34:28 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.911 05:34:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.911 05:34:28 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.911 05:34:28 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:54.911 05:34:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.911 Malloc0 00:05:54.911 05:34:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.911 Malloc1 00:05:54.911 05:34:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.911 05:34:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.911 05:34:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.911 05:34:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.911 05:34:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.911 05:34:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.911 05:34:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.911 05:34:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.911 05:34:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.911 05:34:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.911 05:34:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.911 05:34:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.911 05:34:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.911 05:34:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.911 05:34:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.911 05:34:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.170 /dev/nbd0 00:05:55.170 05:34:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.170 05:34:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.170 05:34:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:55.170 05:34:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:55.170 05:34:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:55.170 05:34:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:55.170 05:34:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:55.170 05:34:28 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:55.170 05:34:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:55.170 05:34:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:55.170 05:34:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:55.170 1+0 records in 00:05:55.170 1+0 records out 00:05:55.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00012456 s, 32.9 MB/s 00:05:55.170 05:34:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.170 05:34:28 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:55.170 05:34:28 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.170 05:34:28 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:55.170 05:34:28 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:55.170 05:34:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.170 05:34:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.170 05:34:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.429 /dev/nbd1 00:05:55.429 05:34:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.429 05:34:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.429 05:34:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:55.429 05:34:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:55.429 05:34:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:55.429 05:34:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:55.429 05:34:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:55.429 05:34:29 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:55.429 05:34:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:55.429 05:34:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:55.429 05:34:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.429 1+0 records in 00:05:55.429 1+0 records out 00:05:55.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229881 s, 17.8 MB/s 00:05:55.429 05:34:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.429 05:34:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:55.429 05:34:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.429 05:34:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:55.429 05:34:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:55.429 05:34:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.429 05:34:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.429 05:34:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.429 05:34:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.429 05:34:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:55.689 { 00:05:55.689 "nbd_device": "/dev/nbd0", 00:05:55.689 "bdev_name": "Malloc0" 00:05:55.689 }, 00:05:55.689 { 00:05:55.689 "nbd_device": "/dev/nbd1", 00:05:55.689 "bdev_name": "Malloc1" 00:05:55.689 } 00:05:55.689 ]' 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.689 { 00:05:55.689 "nbd_device": "/dev/nbd0", 00:05:55.689 "bdev_name": "Malloc0" 00:05:55.689 }, 00:05:55.689 { 00:05:55.689 "nbd_device": "/dev/nbd1", 00:05:55.689 "bdev_name": "Malloc1" 00:05:55.689 } 00:05:55.689 ]' 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.689 /dev/nbd1' 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.689 /dev/nbd1' 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.689 256+0 records in 00:05:55.689 256+0 records out 00:05:55.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101734 s, 103 MB/s 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.689 256+0 records in 00:05:55.689 256+0 records out 00:05:55.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140151 s, 74.8 MB/s 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.689 256+0 records in 00:05:55.689 256+0 records out 00:05:55.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143457 s, 73.1 MB/s 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.689 05:34:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.948 05:34:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.948 05:34:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.948 05:34:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.948 05:34:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.948 05:34:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.948 05:34:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.948 05:34:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.948 05:34:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.948 05:34:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.948 05:34:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.207 05:34:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.207 05:34:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.207 05:34:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.207 05:34:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.207 05:34:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.207 05:34:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.207 05:34:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.207 05:34:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.207 05:34:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.207 05:34:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.207 05:34:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.207 05:34:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.207 05:34:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.207 05:34:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.207 05:34:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.207 05:34:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.207 05:34:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.207 05:34:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.207 05:34:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.207 05:34:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.207 05:34:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.207 05:34:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.207 05:34:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.207 05:34:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.466 05:34:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.725 [2024-12-16 05:34:30.434472] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.725 [2024-12-16 05:34:30.470702] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.725 [2024-12-16 05:34:30.470703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.725 [2024-12-16 05:34:30.511096] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.725 [2024-12-16 05:34:30.511134] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.011 05:34:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3156939 /var/tmp/spdk-nbd.sock 00:06:00.011 05:34:33 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3156939 ']' 00:06:00.011 05:34:33 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.011 05:34:33 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.011 05:34:33 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
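The waitfornbd and waitfornbd_exit helpers that gate the start and stop steps above poll /proc/partitions up to 20 times for the device name. A simplified equivalent (the sleep interval is assumed, and the real waitfornbd additionally reads a block from the device to confirm it is usable):

    waitfornbd() {                 # wait until the kernel has registered the nbd device
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && return 0
            sleep 0.1
        done
        return 1
    }

    waitfornbd_exit() {            # wait until it disappears again after nbd_stop_disk
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1
        done
        return 1
    }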
00:06:00.011 05:34:33 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.011 05:34:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.011 05:34:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.011 05:34:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:00.011 05:34:33 event.app_repeat -- event/event.sh@39 -- # killprocess 3156939 00:06:00.011 05:34:33 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 3156939 ']' 00:06:00.011 05:34:33 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 3156939 00:06:00.011 05:34:33 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:00.011 05:34:33 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.011 05:34:33 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3156939 00:06:00.011 05:34:33 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.011 05:34:33 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.011 05:34:33 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3156939' 00:06:00.011 killing process with pid 3156939 00:06:00.011 05:34:33 event.app_repeat -- common/autotest_common.sh@969 -- # kill 3156939 00:06:00.011 05:34:33 event.app_repeat -- common/autotest_common.sh@974 -- # wait 3156939 00:06:00.011 spdk_app_start is called in Round 0. 00:06:00.011 Shutdown signal received, stop current app iteration 00:06:00.011 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:00.011 spdk_app_start is called in Round 1. 00:06:00.011 Shutdown signal received, stop current app iteration 00:06:00.011 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:00.011 spdk_app_start is called in Round 2. 00:06:00.011 Shutdown signal received, stop current app iteration 00:06:00.011 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:00.011 spdk_app_start is called in Round 3. 
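The killprocess call traced above is a small guard around kill: it checks the pid is still alive, looks up its command name (reactor_0 in these runs), then signals and waits. A simplified equivalent of what the trace shows (the real helper also special-cases processes running under sudo):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                         # still alive?
        local name; name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid"                                        # only valid for children of this shell
    }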
00:06:00.011 Shutdown signal received, stop current app iteration 00:06:00.011 05:34:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:00.011 05:34:33 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:00.011 00:06:00.011 real 0m16.128s 00:06:00.011 user 0m35.228s 00:06:00.011 sys 0m2.511s 00:06:00.011 05:34:33 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.011 05:34:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.011 ************************************ 00:06:00.011 END TEST app_repeat 00:06:00.011 ************************************ 00:06:00.011 05:34:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:00.011 05:34:33 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:00.011 05:34:33 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.011 05:34:33 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.011 05:34:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.011 ************************************ 00:06:00.011 START TEST cpu_locks 00:06:00.011 ************************************ 00:06:00.011 05:34:33 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:00.011 * Looking for test storage... 00:06:00.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:00.011 05:34:33 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:00.011 05:34:33 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:00.011 05:34:33 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:00.271 05:34:33 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.271 05:34:33 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:00.271 05:34:33 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.271 05:34:33 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:00.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.271 --rc genhtml_branch_coverage=1 00:06:00.271 --rc genhtml_function_coverage=1 00:06:00.271 --rc genhtml_legend=1 00:06:00.271 --rc geninfo_all_blocks=1 00:06:00.271 --rc geninfo_unexecuted_blocks=1 00:06:00.271 00:06:00.271 ' 00:06:00.271 05:34:33 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:00.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.271 --rc genhtml_branch_coverage=1 00:06:00.271 --rc genhtml_function_coverage=1 00:06:00.271 --rc genhtml_legend=1 00:06:00.271 --rc geninfo_all_blocks=1 00:06:00.271 --rc geninfo_unexecuted_blocks=1 00:06:00.271 00:06:00.271 ' 00:06:00.271 05:34:33 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:00.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.271 --rc genhtml_branch_coverage=1 00:06:00.271 --rc genhtml_function_coverage=1 00:06:00.271 --rc genhtml_legend=1 00:06:00.271 --rc geninfo_all_blocks=1 00:06:00.271 --rc geninfo_unexecuted_blocks=1 00:06:00.271 00:06:00.271 ' 00:06:00.271 05:34:33 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:00.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.271 --rc genhtml_branch_coverage=1 00:06:00.271 --rc genhtml_function_coverage=1 00:06:00.271 --rc genhtml_legend=1 00:06:00.271 --rc geninfo_all_blocks=1 00:06:00.271 --rc geninfo_unexecuted_blocks=1 00:06:00.271 00:06:00.271 ' 00:06:00.271 05:34:33 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:00.271 05:34:33 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:00.271 05:34:33 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:00.271 05:34:33 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:00.271 05:34:33 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.271 05:34:33 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.271 05:34:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.271 ************************************ 
00:06:00.271 START TEST default_locks 00:06:00.271 ************************************ 00:06:00.271 05:34:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:00.271 05:34:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3159863 00:06:00.271 05:34:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.271 05:34:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3159863 00:06:00.271 05:34:33 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3159863 ']' 00:06:00.271 05:34:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.271 05:34:33 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.271 05:34:33 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.271 05:34:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.271 05:34:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.271 [2024-12-16 05:34:33.992835] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:00.271 [2024-12-16 05:34:33.992882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159863 ] 00:06:00.271 [2024-12-16 05:34:34.048155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.271 [2024-12-16 05:34:34.086574] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.530 05:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.530 05:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:00.530 05:34:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3159863 00:06:00.530 05:34:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3159863 00:06:00.530 05:34:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.098 lslocks: write error 00:06:01.098 05:34:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3159863 00:06:01.098 05:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 3159863 ']' 00:06:01.098 05:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 3159863 00:06:01.098 05:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:01.098 05:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.098 05:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3159863 00:06:01.098 05:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.098 05:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.098 05:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 3159863' 00:06:01.098 killing process with pid 3159863 00:06:01.098 05:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 3159863 00:06:01.098 05:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 3159863 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3159863 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3159863 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3159863 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3159863 ']' 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
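The default_locks test running here hinges on the locks_exist helper seen a few lines above: an spdk_tgt started with -m 0x1 holds a per-core file lock that lslocks can report. A rough equivalent of that check (the spdk_cpu_lock name is taken straight from the grep in the trace):

    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock    # one lock file per core in the app's mask
    }

    # e.g. after starting: build/bin/spdk_tgt -m 0x1 &   and waiting for its RPC socket
    locks_exist "$pid" && echo "core lock held by $pid"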
00:06:01.667 05:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3159863) - No such process 00:06:01.667 ERROR: process (pid: 3159863) is no longer running 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:01.667 00:06:01.667 real 0m1.320s 00:06:01.667 user 0m1.289s 00:06:01.667 sys 0m0.577s 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.667 05:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.667 ************************************ 00:06:01.667 END TEST default_locks 00:06:01.667 ************************************ 00:06:01.667 05:34:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:01.667 05:34:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.667 05:34:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.667 05:34:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.667 ************************************ 00:06:01.667 START TEST default_locks_via_rpc 00:06:01.667 ************************************ 00:06:01.667 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:01.667 05:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.667 05:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3160113 00:06:01.667 05:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3160113 00:06:01.667 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3160113 ']' 00:06:01.667 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.667 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.667 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
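default_locks_via_rpc, starting here, exercises the same lock files through RPC rather than process lifetime: the framework_disable_cpumask_locks and framework_enable_cpumask_locks calls that follow drop and retake the per-core locks on a running target. A hedged sketch of that round trip (socket path and pid handling assumed):

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC framework_disable_cpumask_locks                # release the per-core lock files
    lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "no core locks held"
    $RPC framework_enable_cpumask_locks                 # take them again
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core locks held again"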
00:06:01.667 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.667 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.667 [2024-12-16 05:34:35.372464] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:01.667 [2024-12-16 05:34:35.372503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3160113 ] 00:06:01.667 [2024-12-16 05:34:35.426890] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.667 [2024-12-16 05:34:35.462602] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.926 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.926 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:01.926 05:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:01.926 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.926 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.926 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.926 05:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:01.926 05:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:01.926 05:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:01.926 05:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:01.926 05:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.926 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.926 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.926 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.926 05:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3160113 00:06:01.926 05:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3160113 00:06:01.926 05:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.185 05:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3160113 00:06:02.185 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 3160113 ']' 00:06:02.185 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 3160113 00:06:02.185 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:02.185 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.185 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3160113 00:06:02.185 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.185 
05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.185 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3160113' 00:06:02.185 killing process with pid 3160113 00:06:02.185 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 3160113 00:06:02.185 05:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 3160113 00:06:02.444 00:06:02.444 real 0m0.835s 00:06:02.444 user 0m0.803s 00:06:02.444 sys 0m0.380s 00:06:02.444 05:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.444 05:34:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.444 ************************************ 00:06:02.444 END TEST default_locks_via_rpc 00:06:02.444 ************************************ 00:06:02.444 05:34:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:02.444 05:34:36 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.444 05:34:36 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.444 05:34:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.444 ************************************ 00:06:02.444 START TEST non_locking_app_on_locked_coremask 00:06:02.445 ************************************ 00:06:02.445 05:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:02.445 05:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3160363 00:06:02.445 05:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3160363 /var/tmp/spdk.sock 00:06:02.445 05:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3160363 ']' 00:06:02.445 05:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.445 05:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.445 05:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.445 05:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.445 05:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.445 05:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.445 [2024-12-16 05:34:36.278565] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:02.445 [2024-12-16 05:34:36.278606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3160363 ] 00:06:02.704 [2024-12-16 05:34:36.333243] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.704 [2024-12-16 05:34:36.373286] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.963 05:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.963 05:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:02.963 05:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3160372 00:06:02.963 05:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:02.963 05:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3160372 /var/tmp/spdk2.sock 00:06:02.963 05:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3160372 ']' 00:06:02.963 05:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.963 05:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.963 05:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.963 05:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.963 05:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.963 [2024-12-16 05:34:36.598311] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:02.963 [2024-12-16 05:34:36.598358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3160372 ] 00:06:02.963 [2024-12-16 05:34:36.666796] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:02.963 [2024-12-16 05:34:36.666817] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.963 [2024-12-16 05:34:36.745346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.900 05:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.900 05:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:03.900 05:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3160363 00:06:03.900 05:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3160363 00:06:03.900 05:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.900 lslocks: write error 00:06:03.900 05:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3160363 00:06:03.900 05:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3160363 ']' 00:06:03.900 05:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3160363 00:06:03.900 05:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:03.900 05:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.900 05:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3160363 00:06:03.900 05:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.900 05:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.900 05:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3160363' 00:06:03.900 killing process with pid 3160363 00:06:03.900 05:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3160363 00:06:03.900 05:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3160363 00:06:04.837 05:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3160372 00:06:04.837 05:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3160372 ']' 00:06:04.837 05:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3160372 00:06:04.837 05:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:04.837 05:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.837 05:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3160372 00:06:04.837 05:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.837 05:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.837 05:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3160372' 00:06:04.837 
killing process with pid 3160372 00:06:04.837 05:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3160372 00:06:04.837 05:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3160372 00:06:05.097 00:06:05.097 real 0m2.484s 00:06:05.097 user 0m2.593s 00:06:05.097 sys 0m0.802s 00:06:05.097 05:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.097 05:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.097 ************************************ 00:06:05.097 END TEST non_locking_app_on_locked_coremask 00:06:05.097 ************************************ 00:06:05.097 05:34:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:05.097 05:34:38 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.097 05:34:38 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.097 05:34:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.097 ************************************ 00:06:05.097 START TEST locking_app_on_unlocked_coremask 00:06:05.097 ************************************ 00:06:05.097 05:34:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:05.097 05:34:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3160848 00:06:05.097 05:34:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3160848 /var/tmp/spdk.sock 00:06:05.097 05:34:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:05.097 05:34:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3160848 ']' 00:06:05.097 05:34:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.097 05:34:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.097 05:34:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.097 05:34:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.097 05:34:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.097 [2024-12-16 05:34:38.832265] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:05.097 [2024-12-16 05:34:38.832309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3160848 ] 00:06:05.097 [2024-12-16 05:34:38.887844] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:05.097 [2024-12-16 05:34:38.887876] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.097 [2024-12-16 05:34:38.923421] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.356 05:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.356 05:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:05.356 05:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3160853 00:06:05.356 05:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3160853 /var/tmp/spdk2.sock 00:06:05.356 05:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:05.356 05:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3160853 ']' 00:06:05.356 05:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.356 05:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.356 05:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.356 05:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.356 05:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.356 [2024-12-16 05:34:39.165648] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:05.356 [2024-12-16 05:34:39.165696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3160853 ] 00:06:05.615 [2024-12-16 05:34:39.240568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.615 [2024-12-16 05:34:39.314998] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.182 05:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.182 05:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:06.182 05:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3160853 00:06:06.182 05:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3160853 00:06:06.182 05:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.122 lslocks: write error 00:06:07.122 05:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3160848 00:06:07.122 05:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3160848 ']' 00:06:07.122 05:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3160848 00:06:07.122 05:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:07.122 05:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.122 05:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3160848 00:06:07.122 05:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.122 05:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.122 05:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3160848' 00:06:07.122 killing process with pid 3160848 00:06:07.122 05:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3160848 00:06:07.122 05:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3160848 00:06:07.690 05:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3160853 00:06:07.690 05:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3160853 ']' 00:06:07.690 05:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3160853 00:06:07.690 05:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:07.690 05:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.690 05:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3160853 00:06:07.690 05:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.690 05:34:41 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.690 05:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3160853' 00:06:07.690 killing process with pid 3160853 00:06:07.690 05:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3160853 00:06:07.690 05:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3160853 00:06:07.949 00:06:07.949 real 0m2.893s 00:06:07.949 user 0m3.020s 00:06:07.949 sys 0m0.997s 00:06:07.949 05:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.949 05:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.949 ************************************ 00:06:07.949 END TEST locking_app_on_unlocked_coremask 00:06:07.949 ************************************ 00:06:07.949 05:34:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:07.950 05:34:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.950 05:34:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.950 05:34:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.950 ************************************ 00:06:07.950 START TEST locking_app_on_locked_coremask 00:06:07.950 ************************************ 00:06:07.950 05:34:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:07.950 05:34:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3161339 00:06:07.950 05:34:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3161339 /var/tmp/spdk.sock 00:06:07.950 05:34:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.950 05:34:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3161339 ']' 00:06:07.950 05:34:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.950 05:34:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.950 05:34:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.950 05:34:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.950 05:34:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.950 [2024-12-16 05:34:41.790896] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:07.950 [2024-12-16 05:34:41.790941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3161339 ] 00:06:08.209 [2024-12-16 05:34:41.846386] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.209 [2024-12-16 05:34:41.882983] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.468 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.468 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:08.468 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3161344 00:06:08.468 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3161344 /var/tmp/spdk2.sock 00:06:08.468 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:08.468 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:08.468 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3161344 /var/tmp/spdk2.sock 00:06:08.468 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:08.468 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.468 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:08.468 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.468 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3161344 /var/tmp/spdk2.sock 00:06:08.468 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3161344 ']' 00:06:08.468 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.468 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.468 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.468 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.468 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.468 [2024-12-16 05:34:42.124143] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:08.468 [2024-12-16 05:34:42.124185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3161344 ] 00:06:08.468 [2024-12-16 05:34:42.199995] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3161339 has claimed it. 00:06:08.468 [2024-12-16 05:34:42.200034] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:09.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3161344) - No such process 00:06:09.035 ERROR: process (pid: 3161344) is no longer running 00:06:09.035 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.035 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:09.035 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:09.035 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.035 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:09.035 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.035 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3161339 00:06:09.035 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3161339 00:06:09.035 05:34:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.603 lslocks: write error 00:06:09.603 05:34:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3161339 00:06:09.603 05:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3161339 ']' 00:06:09.603 05:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3161339 00:06:09.603 05:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:09.603 05:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.603 05:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3161339 00:06:09.603 05:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.603 05:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.603 05:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3161339' 00:06:09.603 killing process with pid 3161339 00:06:09.603 05:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3161339 00:06:09.603 05:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3161339 00:06:09.862 00:06:09.862 real 0m1.881s 00:06:09.862 user 0m2.026s 00:06:09.862 sys 0m0.666s 00:06:09.862 05:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:06:09.862 05:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.862 ************************************ 00:06:09.862 END TEST locking_app_on_locked_coremask 00:06:09.862 ************************************ 00:06:09.862 05:34:43 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:09.862 05:34:43 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.862 05:34:43 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.862 05:34:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.862 ************************************ 00:06:09.862 START TEST locking_overlapped_coremask 00:06:09.862 ************************************ 00:06:09.862 05:34:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:09.862 05:34:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3161718 00:06:09.862 05:34:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3161718 /var/tmp/spdk.sock 00:06:09.862 05:34:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:09.862 05:34:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3161718 ']' 00:06:09.862 05:34:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.862 05:34:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.862 05:34:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.862 05:34:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.862 05:34:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.120 [2024-12-16 05:34:43.743591] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:10.120 [2024-12-16 05:34:43.743635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3161718 ] 00:06:10.120 [2024-12-16 05:34:43.798864] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.120 [2024-12-16 05:34:43.840381] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.120 [2024-12-16 05:34:43.840484] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.120 [2024-12-16 05:34:43.840485] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.379 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.379 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:10.379 05:34:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3161818 00:06:10.379 05:34:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3161818 /var/tmp/spdk2.sock 00:06:10.379 05:34:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:10.379 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:10.379 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3161818 /var/tmp/spdk2.sock 00:06:10.379 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:10.379 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.379 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:10.379 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.379 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3161818 /var/tmp/spdk2.sock 00:06:10.379 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3161818 ']' 00:06:10.379 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.379 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.379 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.379 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.379 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.379 [2024-12-16 05:34:44.082394] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:10.379 [2024-12-16 05:34:44.082442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3161818 ] 00:06:10.379 [2024-12-16 05:34:44.159069] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3161718 has claimed it. 00:06:10.379 [2024-12-16 05:34:44.159108] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:10.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3161818) - No such process 00:06:10.947 ERROR: process (pid: 3161818) is no longer running 00:06:10.947 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.947 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:10.947 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:10.947 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.947 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:10.947 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.947 05:34:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:10.947 05:34:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:10.947 05:34:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:10.947 05:34:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:10.947 05:34:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3161718 00:06:10.947 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 3161718 ']' 00:06:10.947 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 3161718 00:06:10.947 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:10.947 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.947 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3161718 00:06:10.947 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.947 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.947 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3161718' 00:06:10.947 killing process with pid 3161718 00:06:10.947 05:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 3161718 00:06:10.947 05:34:44 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 3161718 00:06:11.515 00:06:11.515 real 0m1.393s 00:06:11.515 user 0m3.853s 00:06:11.515 sys 0m0.380s 00:06:11.515 05:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.515 05:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.516 ************************************ 00:06:11.516 END TEST locking_overlapped_coremask 00:06:11.516 ************************************ 00:06:11.516 05:34:45 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:11.516 05:34:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.516 05:34:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.516 05:34:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.516 ************************************ 00:06:11.516 START TEST locking_overlapped_coremask_via_rpc 00:06:11.516 ************************************ 00:06:11.516 05:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:11.516 05:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3162060 00:06:11.516 05:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3162060 /var/tmp/spdk.sock 00:06:11.516 05:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:11.516 05:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3162060 ']' 00:06:11.516 05:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.516 05:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.516 05:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.516 05:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.516 05:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.516 [2024-12-16 05:34:45.201122] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:11.516 [2024-12-16 05:34:45.201166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162060 ] 00:06:11.516 [2024-12-16 05:34:45.257718] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:11.516 [2024-12-16 05:34:45.257744] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.516 [2024-12-16 05:34:45.296371] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.516 [2024-12-16 05:34:45.296470] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.516 [2024-12-16 05:34:45.296472] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.774 05:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.774 05:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:11.774 05:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3162075 00:06:11.774 05:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3162075 /var/tmp/spdk2.sock 00:06:11.774 05:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:11.774 05:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3162075 ']' 00:06:11.774 05:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.774 05:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.774 05:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.774 05:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.774 05:34:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.774 [2024-12-16 05:34:45.544095] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:11.774 [2024-12-16 05:34:45.544144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162075 ] 00:06:11.774 [2024-12-16 05:34:45.622717] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:11.774 [2024-12-16 05:34:45.622748] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.032 [2024-12-16 05:34:45.702962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.032 [2024-12-16 05:34:45.703082] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.032 [2024-12-16 05:34:45.703082] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:12.599 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.599 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:12.599 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:12.599 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.599 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.599 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.599 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:12.599 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:12.599 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:12.600 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:12.600 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.600 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:12.600 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.600 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:12.600 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.600 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.600 [2024-12-16 05:34:46.403914] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3162060 has claimed it. 
00:06:12.600 request: 00:06:12.600 { 00:06:12.600 "method": "framework_enable_cpumask_locks", 00:06:12.600 "req_id": 1 00:06:12.600 } 00:06:12.600 Got JSON-RPC error response 00:06:12.600 response: 00:06:12.600 { 00:06:12.600 "code": -32603, 00:06:12.600 "message": "Failed to claim CPU core: 2" 00:06:12.600 } 00:06:12.600 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:12.600 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:12.600 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:12.600 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:12.600 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:12.600 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3162060 /var/tmp/spdk.sock 00:06:12.600 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3162060 ']' 00:06:12.600 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.600 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.600 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.600 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.600 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.858 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.858 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:12.858 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3162075 /var/tmp/spdk2.sock 00:06:12.858 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3162075 ']' 00:06:12.858 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.858 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.858 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:12.859 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.859 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.118 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.118 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:13.118 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:13.118 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:13.118 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:13.118 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:13.118 00:06:13.118 real 0m1.660s 00:06:13.118 user 0m0.814s 00:06:13.118 sys 0m0.130s 00:06:13.118 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.118 05:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.118 ************************************ 00:06:13.118 END TEST locking_overlapped_coremask_via_rpc 00:06:13.118 ************************************ 00:06:13.118 05:34:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:13.118 05:34:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3162060 ]] 00:06:13.118 05:34:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3162060 00:06:13.118 05:34:46 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3162060 ']' 00:06:13.118 05:34:46 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3162060 00:06:13.118 05:34:46 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:13.118 05:34:46 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.118 05:34:46 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3162060 00:06:13.118 05:34:46 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.118 05:34:46 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.118 05:34:46 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3162060' 00:06:13.118 killing process with pid 3162060 00:06:13.118 05:34:46 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3162060 00:06:13.118 05:34:46 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3162060 00:06:13.377 05:34:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3162075 ]] 00:06:13.377 05:34:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3162075 00:06:13.377 05:34:47 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3162075 ']' 00:06:13.377 05:34:47 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3162075 00:06:13.377 05:34:47 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:13.377 05:34:47 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:06:13.377 05:34:47 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3162075 00:06:13.636 05:34:47 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:13.636 05:34:47 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:13.636 05:34:47 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3162075' 00:06:13.636 killing process with pid 3162075 00:06:13.636 05:34:47 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3162075 00:06:13.636 05:34:47 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3162075 00:06:13.895 05:34:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:13.895 05:34:47 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:13.895 05:34:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3162060 ]] 00:06:13.895 05:34:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3162060 00:06:13.895 05:34:47 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3162060 ']' 00:06:13.895 05:34:47 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3162060 00:06:13.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3162060) - No such process 00:06:13.895 05:34:47 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3162060 is not found' 00:06:13.895 Process with pid 3162060 is not found 00:06:13.895 05:34:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3162075 ]] 00:06:13.895 05:34:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3162075 00:06:13.895 05:34:47 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3162075 ']' 00:06:13.895 05:34:47 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3162075 00:06:13.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3162075) - No such process 00:06:13.895 05:34:47 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3162075 is not found' 00:06:13.895 Process with pid 3162075 is not found 00:06:13.895 05:34:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:13.895 00:06:13.895 real 0m13.855s 00:06:13.895 user 0m24.079s 00:06:13.895 sys 0m4.909s 00:06:13.895 05:34:47 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.895 05:34:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.895 ************************************ 00:06:13.895 END TEST cpu_locks 00:06:13.895 ************************************ 00:06:13.895 00:06:13.895 real 0m38.429s 00:06:13.895 user 1m13.403s 00:06:13.895 sys 0m8.365s 00:06:13.895 05:34:47 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.895 05:34:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.895 ************************************ 00:06:13.895 END TEST event 00:06:13.895 ************************************ 00:06:13.895 05:34:47 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:13.895 05:34:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.895 05:34:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.895 05:34:47 -- common/autotest_common.sh@10 -- # set +x 00:06:13.895 ************************************ 00:06:13.895 START TEST thread 00:06:13.895 ************************************ 00:06:13.895 05:34:47 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:14.154 * Looking for test storage... 00:06:14.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:14.154 05:34:47 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:14.154 05:34:47 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:14.155 05:34:47 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:14.155 05:34:47 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:14.155 05:34:47 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.155 05:34:47 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.155 05:34:47 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.155 05:34:47 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.155 05:34:47 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.155 05:34:47 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.155 05:34:47 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.155 05:34:47 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.155 05:34:47 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.155 05:34:47 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.155 05:34:47 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.155 05:34:47 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:14.155 05:34:47 thread -- scripts/common.sh@345 -- # : 1 00:06:14.155 05:34:47 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.155 05:34:47 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.155 05:34:47 thread -- scripts/common.sh@365 -- # decimal 1 00:06:14.155 05:34:47 thread -- scripts/common.sh@353 -- # local d=1 00:06:14.155 05:34:47 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.155 05:34:47 thread -- scripts/common.sh@355 -- # echo 1 00:06:14.155 05:34:47 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.155 05:34:47 thread -- scripts/common.sh@366 -- # decimal 2 00:06:14.155 05:34:47 thread -- scripts/common.sh@353 -- # local d=2 00:06:14.155 05:34:47 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.155 05:34:47 thread -- scripts/common.sh@355 -- # echo 2 00:06:14.155 05:34:47 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.155 05:34:47 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.155 05:34:47 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.155 05:34:47 thread -- scripts/common.sh@368 -- # return 0 00:06:14.155 05:34:47 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.155 05:34:47 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:14.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.155 --rc genhtml_branch_coverage=1 00:06:14.155 --rc genhtml_function_coverage=1 00:06:14.155 --rc genhtml_legend=1 00:06:14.155 --rc geninfo_all_blocks=1 00:06:14.155 --rc geninfo_unexecuted_blocks=1 00:06:14.155 00:06:14.155 ' 00:06:14.155 05:34:47 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:14.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.155 --rc genhtml_branch_coverage=1 00:06:14.155 --rc genhtml_function_coverage=1 00:06:14.155 --rc genhtml_legend=1 00:06:14.155 --rc geninfo_all_blocks=1 00:06:14.155 --rc geninfo_unexecuted_blocks=1 00:06:14.155 
00:06:14.155 ' 00:06:14.155 05:34:47 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:14.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.155 --rc genhtml_branch_coverage=1 00:06:14.155 --rc genhtml_function_coverage=1 00:06:14.155 --rc genhtml_legend=1 00:06:14.155 --rc geninfo_all_blocks=1 00:06:14.155 --rc geninfo_unexecuted_blocks=1 00:06:14.155 00:06:14.155 ' 00:06:14.155 05:34:47 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:14.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.155 --rc genhtml_branch_coverage=1 00:06:14.155 --rc genhtml_function_coverage=1 00:06:14.155 --rc genhtml_legend=1 00:06:14.155 --rc geninfo_all_blocks=1 00:06:14.155 --rc geninfo_unexecuted_blocks=1 00:06:14.155 00:06:14.155 ' 00:06:14.155 05:34:47 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.155 05:34:47 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:14.155 05:34:47 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.155 05:34:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.155 ************************************ 00:06:14.155 START TEST thread_poller_perf 00:06:14.155 ************************************ 00:06:14.155 05:34:47 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.155 [2024-12-16 05:34:47.915529] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:14.155 [2024-12-16 05:34:47.915567] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162631 ] 00:06:14.155 [2024-12-16 05:34:47.968466] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.155 [2024-12-16 05:34:48.007056] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.155 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:15.531 [2024-12-16T04:34:49.387Z] ====================================== 00:06:15.531 [2024-12-16T04:34:49.387Z] busy:2108833708 (cyc) 00:06:15.531 [2024-12-16T04:34:49.387Z] total_run_count: 424000 00:06:15.531 [2024-12-16T04:34:49.387Z] tsc_hz: 2100000000 (cyc) 00:06:15.531 [2024-12-16T04:34:49.387Z] ====================================== 00:06:15.531 [2024-12-16T04:34:49.387Z] poller_cost: 4973 (cyc), 2368 (nsec) 00:06:15.531 00:06:15.531 real 0m1.167s 00:06:15.531 user 0m1.096s 00:06:15.531 sys 0m0.068s 00:06:15.531 05:34:49 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.531 05:34:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:15.531 ************************************ 00:06:15.531 END TEST thread_poller_perf 00:06:15.531 ************************************ 00:06:15.531 05:34:49 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:15.531 05:34:49 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:15.531 05:34:49 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.531 05:34:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.531 ************************************ 00:06:15.531 START TEST thread_poller_perf 00:06:15.531 ************************************ 00:06:15.532 05:34:49 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:15.532 [2024-12-16 05:34:49.160571] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:15.532 [2024-12-16 05:34:49.160644] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162820 ] 00:06:15.532 [2024-12-16 05:34:49.222595] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.532 [2024-12-16 05:34:49.262191] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.532 Running 1000 pollers for 1 seconds with 0 microseconds period. 
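The poller_cost values in these tables can be reproduced from the other counters: cycles per poll is the busy cycle count divided by total_run_count, and the nanosecond figure converts that through tsc_hz. A minimal shell sketch using the first run's numbers above (variable names are illustrative and not part of poller_perf itself):

busy_cyc=2108833708          # "busy" cycles reported above
run_count=424000             # "total_run_count" reported above
tsc_hz=2100000000            # "tsc_hz" reported above
cost_cyc=$(( busy_cyc / run_count ))                         # -> 4973 (cyc)
cost_nsec=$(( busy_cyc * 1000000000 / tsc_hz / run_count ))  # -> 2368 (nsec)
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"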
00:06:16.468 [2024-12-16T04:34:50.324Z] ====================================== 00:06:16.468 [2024-12-16T04:34:50.324Z] busy:2101573082 (cyc) 00:06:16.468 [2024-12-16T04:34:50.324Z] total_run_count: 5328000 00:06:16.468 [2024-12-16T04:34:50.324Z] tsc_hz: 2100000000 (cyc) 00:06:16.468 [2024-12-16T04:34:50.324Z] ====================================== 00:06:16.468 [2024-12-16T04:34:50.324Z] poller_cost: 394 (cyc), 187 (nsec) 00:06:16.727 00:06:16.727 real 0m1.187s 00:06:16.727 user 0m1.104s 00:06:16.727 sys 0m0.079s 00:06:16.727 05:34:50 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.727 05:34:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.727 ************************************ 00:06:16.727 END TEST thread_poller_perf 00:06:16.727 ************************************ 00:06:16.727 05:34:50 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:16.727 00:06:16.727 real 0m2.650s 00:06:16.727 user 0m2.364s 00:06:16.727 sys 0m0.296s 00:06:16.727 05:34:50 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.727 05:34:50 thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.727 ************************************ 00:06:16.727 END TEST thread 00:06:16.727 ************************************ 00:06:16.727 05:34:50 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:16.727 05:34:50 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:16.727 05:34:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.727 05:34:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.727 05:34:50 -- common/autotest_common.sh@10 -- # set +x 00:06:16.727 ************************************ 00:06:16.727 START TEST app_cmdline 00:06:16.727 ************************************ 00:06:16.727 05:34:50 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:16.727 * Looking for test storage... 
00:06:16.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:16.727 05:34:50 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:16.727 05:34:50 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:16.727 05:34:50 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:16.727 05:34:50 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:16.727 05:34:50 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.727 05:34:50 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.727 05:34:50 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.727 05:34:50 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.727 05:34:50 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.727 05:34:50 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.727 05:34:50 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.727 05:34:50 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.727 05:34:50 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.727 05:34:50 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.727 05:34:50 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.727 05:34:50 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:16.727 05:34:50 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:16.727 05:34:50 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.727 05:34:50 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.727 05:34:50 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:16.987 05:34:50 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:16.987 05:34:50 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.987 05:34:50 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:16.987 05:34:50 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.987 05:34:50 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:16.987 05:34:50 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:16.987 05:34:50 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.987 05:34:50 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:16.987 05:34:50 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.987 05:34:50 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.987 05:34:50 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.987 05:34:50 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:16.987 05:34:50 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.987 05:34:50 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:16.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.987 --rc genhtml_branch_coverage=1 00:06:16.987 --rc genhtml_function_coverage=1 00:06:16.987 --rc genhtml_legend=1 00:06:16.987 --rc geninfo_all_blocks=1 00:06:16.987 --rc geninfo_unexecuted_blocks=1 00:06:16.987 00:06:16.987 ' 00:06:16.987 05:34:50 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:16.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.987 --rc genhtml_branch_coverage=1 00:06:16.987 --rc genhtml_function_coverage=1 00:06:16.987 --rc genhtml_legend=1 00:06:16.987 --rc geninfo_all_blocks=1 00:06:16.987 --rc geninfo_unexecuted_blocks=1 
00:06:16.987 00:06:16.987 ' 00:06:16.987 05:34:50 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:16.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.987 --rc genhtml_branch_coverage=1 00:06:16.987 --rc genhtml_function_coverage=1 00:06:16.987 --rc genhtml_legend=1 00:06:16.987 --rc geninfo_all_blocks=1 00:06:16.987 --rc geninfo_unexecuted_blocks=1 00:06:16.987 00:06:16.987 ' 00:06:16.987 05:34:50 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:16.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.987 --rc genhtml_branch_coverage=1 00:06:16.987 --rc genhtml_function_coverage=1 00:06:16.987 --rc genhtml_legend=1 00:06:16.987 --rc geninfo_all_blocks=1 00:06:16.987 --rc geninfo_unexecuted_blocks=1 00:06:16.987 00:06:16.987 ' 00:06:16.987 05:34:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:16.987 05:34:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3163159 00:06:16.987 05:34:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3163159 00:06:16.987 05:34:50 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:16.987 05:34:50 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 3163159 ']' 00:06:16.987 05:34:50 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.987 05:34:50 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.987 05:34:50 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.987 05:34:50 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.987 05:34:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:16.987 [2024-12-16 05:34:50.643938] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:16.987 [2024-12-16 05:34:50.643987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3163159 ] 00:06:16.987 [2024-12-16 05:34:50.700522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.987 [2024-12-16 05:34:50.738627] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.246 05:34:50 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.246 05:34:50 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:17.246 05:34:50 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:17.505 { 00:06:17.505 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:17.505 "fields": { 00:06:17.505 "major": 24, 00:06:17.505 "minor": 9, 00:06:17.505 "patch": 1, 00:06:17.505 "suffix": "-pre", 00:06:17.505 "commit": "b18e1bd62" 00:06:17.505 } 00:06:17.505 } 00:06:17.505 05:34:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:17.505 05:34:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:17.505 05:34:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:17.505 05:34:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:17.505 05:34:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:17.505 05:34:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:17.505 05:34:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.505 05:34:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:17.505 05:34:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:17.505 05:34:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:17.505 request: 00:06:17.505 { 00:06:17.505 "method": "env_dpdk_get_mem_stats", 00:06:17.505 "req_id": 1 00:06:17.505 } 00:06:17.505 Got JSON-RPC error response 00:06:17.505 response: 00:06:17.505 { 00:06:17.505 "code": -32601, 00:06:17.505 "message": "Method not found" 00:06:17.505 } 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.505 05:34:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3163159 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 3163159 ']' 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 3163159 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.505 05:34:51 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3163159 00:06:17.764 05:34:51 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.764 05:34:51 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.764 05:34:51 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3163159' 00:06:17.764 killing process with pid 3163159 00:06:17.764 05:34:51 app_cmdline -- common/autotest_common.sh@969 -- # kill 3163159 00:06:17.764 05:34:51 app_cmdline -- common/autotest_common.sh@974 -- # wait 3163159 00:06:18.023 00:06:18.023 real 0m1.283s 00:06:18.023 user 0m1.476s 00:06:18.023 sys 0m0.429s 00:06:18.023 05:34:51 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.023 05:34:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:18.023 ************************************ 00:06:18.023 END TEST app_cmdline 00:06:18.023 ************************************ 00:06:18.023 05:34:51 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:18.023 05:34:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.023 05:34:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.023 05:34:51 -- common/autotest_common.sh@10 -- # set +x 00:06:18.023 ************************************ 00:06:18.023 START TEST version 00:06:18.023 ************************************ 00:06:18.023 05:34:51 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:18.023 * Looking for test storage... 
00:06:18.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:18.023 05:34:51 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:18.023 05:34:51 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:18.023 05:34:51 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:18.283 05:34:51 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:18.283 05:34:51 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.283 05:34:51 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.283 05:34:51 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.283 05:34:51 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.283 05:34:51 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.283 05:34:51 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.283 05:34:51 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.283 05:34:51 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.283 05:34:51 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.283 05:34:51 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.283 05:34:51 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.283 05:34:51 version -- scripts/common.sh@344 -- # case "$op" in 00:06:18.283 05:34:51 version -- scripts/common.sh@345 -- # : 1 00:06:18.283 05:34:51 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.283 05:34:51 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.283 05:34:51 version -- scripts/common.sh@365 -- # decimal 1 00:06:18.283 05:34:51 version -- scripts/common.sh@353 -- # local d=1 00:06:18.283 05:34:51 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.283 05:34:51 version -- scripts/common.sh@355 -- # echo 1 00:06:18.283 05:34:51 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.283 05:34:51 version -- scripts/common.sh@366 -- # decimal 2 00:06:18.283 05:34:51 version -- scripts/common.sh@353 -- # local d=2 00:06:18.283 05:34:51 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.283 05:34:51 version -- scripts/common.sh@355 -- # echo 2 00:06:18.283 05:34:51 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.283 05:34:51 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.283 05:34:51 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.283 05:34:51 version -- scripts/common.sh@368 -- # return 0 00:06:18.283 05:34:51 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.283 05:34:51 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:18.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.283 --rc genhtml_branch_coverage=1 00:06:18.283 --rc genhtml_function_coverage=1 00:06:18.283 --rc genhtml_legend=1 00:06:18.283 --rc geninfo_all_blocks=1 00:06:18.283 --rc geninfo_unexecuted_blocks=1 00:06:18.283 00:06:18.283 ' 00:06:18.283 05:34:51 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:18.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.283 --rc genhtml_branch_coverage=1 00:06:18.283 --rc genhtml_function_coverage=1 00:06:18.283 --rc genhtml_legend=1 00:06:18.283 --rc geninfo_all_blocks=1 00:06:18.283 --rc geninfo_unexecuted_blocks=1 00:06:18.283 00:06:18.283 ' 00:06:18.283 05:34:51 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:18.283 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.283 --rc genhtml_branch_coverage=1 00:06:18.283 --rc genhtml_function_coverage=1 00:06:18.283 --rc genhtml_legend=1 00:06:18.283 --rc geninfo_all_blocks=1 00:06:18.283 --rc geninfo_unexecuted_blocks=1 00:06:18.283 00:06:18.283 ' 00:06:18.283 05:34:51 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:18.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.283 --rc genhtml_branch_coverage=1 00:06:18.283 --rc genhtml_function_coverage=1 00:06:18.283 --rc genhtml_legend=1 00:06:18.283 --rc geninfo_all_blocks=1 00:06:18.283 --rc geninfo_unexecuted_blocks=1 00:06:18.283 00:06:18.283 ' 00:06:18.283 05:34:51 version -- app/version.sh@17 -- # get_header_version major 00:06:18.283 05:34:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:18.283 05:34:51 version -- app/version.sh@14 -- # cut -f2 00:06:18.283 05:34:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:18.283 05:34:51 version -- app/version.sh@17 -- # major=24 00:06:18.283 05:34:51 version -- app/version.sh@18 -- # get_header_version minor 00:06:18.283 05:34:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:18.283 05:34:51 version -- app/version.sh@14 -- # cut -f2 00:06:18.283 05:34:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:18.283 05:34:51 version -- app/version.sh@18 -- # minor=9 00:06:18.283 05:34:51 version -- app/version.sh@19 -- # get_header_version patch 00:06:18.283 05:34:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:18.283 05:34:51 version -- app/version.sh@14 -- # cut -f2 00:06:18.283 05:34:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:18.283 05:34:51 version -- app/version.sh@19 -- # patch=1 00:06:18.283 05:34:51 version -- app/version.sh@20 -- # get_header_version suffix 00:06:18.283 05:34:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:18.283 05:34:51 version -- app/version.sh@14 -- # cut -f2 00:06:18.283 05:34:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:18.283 05:34:51 version -- app/version.sh@20 -- # suffix=-pre 00:06:18.283 05:34:51 version -- app/version.sh@22 -- # version=24.9 00:06:18.283 05:34:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:18.283 05:34:51 version -- app/version.sh@25 -- # version=24.9.1 00:06:18.283 05:34:51 version -- app/version.sh@28 -- # version=24.9.1rc0 00:06:18.283 05:34:51 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:18.283 05:34:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:18.283 05:34:52 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:06:18.283 05:34:52 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:06:18.283 00:06:18.283 real 0m0.233s 00:06:18.283 user 0m0.136s 00:06:18.283 sys 0m0.139s 00:06:18.283 05:34:52 
version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.283 05:34:52 version -- common/autotest_common.sh@10 -- # set +x 00:06:18.283 ************************************ 00:06:18.283 END TEST version 00:06:18.283 ************************************ 00:06:18.283 05:34:52 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:18.283 05:34:52 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:18.283 05:34:52 -- spdk/autotest.sh@194 -- # uname -s 00:06:18.283 05:34:52 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:18.283 05:34:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:18.283 05:34:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:18.283 05:34:52 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:18.283 05:34:52 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:18.283 05:34:52 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:18.283 05:34:52 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:18.283 05:34:52 -- common/autotest_common.sh@10 -- # set +x 00:06:18.283 05:34:52 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:18.283 05:34:52 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:18.284 05:34:52 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:18.284 05:34:52 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:18.284 05:34:52 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:06:18.284 05:34:52 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:06:18.284 05:34:52 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:18.284 05:34:52 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:18.284 05:34:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.284 05:34:52 -- common/autotest_common.sh@10 -- # set +x 00:06:18.284 ************************************ 00:06:18.284 START TEST nvmf_tcp 00:06:18.284 ************************************ 00:06:18.284 05:34:52 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:18.543 * Looking for test storage... 
00:06:18.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:18.543 05:34:52 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:18.543 05:34:52 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:18.543 05:34:52 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:18.543 05:34:52 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.543 05:34:52 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:18.543 05:34:52 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.543 05:34:52 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:18.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.543 --rc genhtml_branch_coverage=1 00:06:18.543 --rc genhtml_function_coverage=1 00:06:18.543 --rc genhtml_legend=1 00:06:18.543 --rc geninfo_all_blocks=1 00:06:18.543 --rc geninfo_unexecuted_blocks=1 00:06:18.543 00:06:18.543 ' 00:06:18.543 05:34:52 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:18.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.543 --rc genhtml_branch_coverage=1 00:06:18.543 --rc genhtml_function_coverage=1 00:06:18.543 --rc genhtml_legend=1 00:06:18.543 --rc geninfo_all_blocks=1 00:06:18.543 --rc geninfo_unexecuted_blocks=1 00:06:18.543 00:06:18.543 ' 00:06:18.543 05:34:52 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:06:18.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.543 --rc genhtml_branch_coverage=1 00:06:18.543 --rc genhtml_function_coverage=1 00:06:18.543 --rc genhtml_legend=1 00:06:18.543 --rc geninfo_all_blocks=1 00:06:18.543 --rc geninfo_unexecuted_blocks=1 00:06:18.543 00:06:18.543 ' 00:06:18.543 05:34:52 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:18.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.543 --rc genhtml_branch_coverage=1 00:06:18.543 --rc genhtml_function_coverage=1 00:06:18.543 --rc genhtml_legend=1 00:06:18.543 --rc geninfo_all_blocks=1 00:06:18.543 --rc geninfo_unexecuted_blocks=1 00:06:18.543 00:06:18.543 ' 00:06:18.543 05:34:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:18.543 05:34:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:18.543 05:34:52 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:18.543 05:34:52 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:18.543 05:34:52 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.543 05:34:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.543 ************************************ 00:06:18.543 START TEST nvmf_target_core 00:06:18.543 ************************************ 00:06:18.543 05:34:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:18.543 * Looking for test storage... 00:06:18.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:18.543 05:34:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:18.543 05:34:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:06:18.543 05:34:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:18.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.803 --rc genhtml_branch_coverage=1 00:06:18.803 --rc genhtml_function_coverage=1 00:06:18.803 --rc genhtml_legend=1 00:06:18.803 --rc geninfo_all_blocks=1 00:06:18.803 --rc geninfo_unexecuted_blocks=1 00:06:18.803 00:06:18.803 ' 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:18.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.803 --rc genhtml_branch_coverage=1 00:06:18.803 --rc genhtml_function_coverage=1 00:06:18.803 --rc genhtml_legend=1 00:06:18.803 --rc geninfo_all_blocks=1 00:06:18.803 --rc geninfo_unexecuted_blocks=1 00:06:18.803 00:06:18.803 ' 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:18.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.803 --rc genhtml_branch_coverage=1 00:06:18.803 --rc genhtml_function_coverage=1 00:06:18.803 --rc genhtml_legend=1 00:06:18.803 --rc geninfo_all_blocks=1 00:06:18.803 --rc geninfo_unexecuted_blocks=1 00:06:18.803 00:06:18.803 ' 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:18.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.803 --rc genhtml_branch_coverage=1 00:06:18.803 --rc genhtml_function_coverage=1 00:06:18.803 --rc genhtml_legend=1 00:06:18.803 --rc geninfo_all_blocks=1 00:06:18.803 --rc geninfo_unexecuted_blocks=1 00:06:18.803 00:06:18.803 ' 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:18.803 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:18.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:18.804 
************************************ 00:06:18.804 START TEST nvmf_abort 00:06:18.804 ************************************ 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:18.804 * Looking for test storage... 00:06:18.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:18.804 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:19.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.064 --rc genhtml_branch_coverage=1 00:06:19.064 --rc genhtml_function_coverage=1 00:06:19.064 --rc genhtml_legend=1 00:06:19.064 --rc geninfo_all_blocks=1 00:06:19.064 --rc geninfo_unexecuted_blocks=1 00:06:19.064 00:06:19.064 ' 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:19.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.064 --rc genhtml_branch_coverage=1 00:06:19.064 --rc genhtml_function_coverage=1 00:06:19.064 --rc genhtml_legend=1 00:06:19.064 --rc geninfo_all_blocks=1 00:06:19.064 --rc geninfo_unexecuted_blocks=1 00:06:19.064 00:06:19.064 ' 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:19.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.064 --rc genhtml_branch_coverage=1 00:06:19.064 --rc genhtml_function_coverage=1 00:06:19.064 --rc genhtml_legend=1 00:06:19.064 --rc geninfo_all_blocks=1 00:06:19.064 --rc geninfo_unexecuted_blocks=1 00:06:19.064 00:06:19.064 ' 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:19.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.064 --rc genhtml_branch_coverage=1 00:06:19.064 --rc genhtml_function_coverage=1 00:06:19.064 --rc genhtml_legend=1 00:06:19.064 --rc geninfo_all_blocks=1 00:06:19.064 --rc geninfo_unexecuted_blocks=1 00:06:19.064 00:06:19.064 ' 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:19.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
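The "[: : integer expression expected" complaint above appears benign in this run (the trace continues normally): the xtrace shows nvmf/common.sh line 33 handing an empty string to an arithmetic test ('[' '' -eq 1 ']'). A minimal sketch of that failure mode and a defaulted-expansion guard, outside the harness itself (FLAG is a hypothetical variable name, not the one common.sh actually tests):

    [ '' -eq 1 ]                           # reproduces "[: : integer expression expected"; exit status 2
    FLAG=''
    [ "${FLAG:-0}" -eq 1 ] && echo "set"   # ${FLAG:-0} substitutes 0 when FLAG is unset or empty, so the test stays quiet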
00:06:19.064 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:19.065 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:19.065 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:19.065 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:19.065 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:19.065 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:19.065 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:19.065 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.065 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:19.065 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:06:19.065 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:19.065 05:34:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.520 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:24.520 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:24.520 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:24.520 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:24.520 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:24.520 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:24.520 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:24.520 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:24.520 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:24.520 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:24.520 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:24.520 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:24.520 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:24.520 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:24.520 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:24.520 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:24.520 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:24.520 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:24.520 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:24.520 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:24.521 05:34:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:24.521 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:24.521 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:24.521 05:34:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:24.521 Found net devices under 0000:af:00.0: cvl_0_0 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:24.521 Found net devices under 0000:af:00.1: cvl_0_1 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:24.521 05:34:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:24.521 05:34:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:24.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:24.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:06:24.521 00:06:24.521 --- 10.0.0.2 ping statistics --- 00:06:24.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.521 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:24.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:24.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:06:24.521 00:06:24.521 --- 10.0.0.1 ping statistics --- 00:06:24.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.521 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=3166575 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 3166575 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3166575 ']' 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.521 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.521 [2024-12-16 05:34:58.207791] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
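The nvmftestinit trace above builds a two-port TCP topology on the E810 NIC: port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), port cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens port 4420, and the two pings confirm both directions before nvmf_tgt is started inside the namespace. A condensed sketch of those commands as they appear in the trace (interface names and addresses are the ones logged; paths are relative to the spdk checkout):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # harness tags the rule with an SPDK_NVMF comment for later cleanup
    ping -c 1 10.0.0.2                                                    # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target namespace -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE   # shm id 0, all tracepoint groups, cores 1-3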
00:06:24.521 [2024-12-16 05:34:58.207834] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:24.522 [2024-12-16 05:34:58.267910] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.522 [2024-12-16 05:34:58.308843] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:24.522 [2024-12-16 05:34:58.308888] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:24.522 [2024-12-16 05:34:58.308898] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:24.522 [2024-12-16 05:34:58.308905] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:24.522 [2024-12-16 05:34:58.308911] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:24.522 [2024-12-16 05:34:58.309015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.522 [2024-12-16 05:34:58.309106] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.522 [2024-12-16 05:34:58.309110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.781 [2024-12-16 05:34:58.439732] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.781 Malloc0 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.781 Delay0 
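With the target up, the rpc_cmd calls in the trace stand up the storage stack that the abort test will exercise: a TCP transport, a 64 MiB / 4096-byte-block malloc bdev, and a delay bdev layered on top whose roughly 1-second latencies (the -r/-t/-w/-n values are microseconds) keep commands in flight long enough to be aborted. Roughly equivalent standalone invocations, assuming the default RPC socket (rpc_cmd in the harness forwards to scripts/rpc.py):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000       # avg/p99 read and write latencies, in microseconds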
00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.781 [2024-12-16 05:34:58.513312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.781 05:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:25.041 [2024-12-16 05:34:58.672015] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:26.944 Initializing NVMe Controllers 00:06:26.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:26.944 controller IO queue size 128 less than required 00:06:26.944 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:26.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:26.944 Initialization complete. Launching workers. 
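The remaining rpc_cmd calls publish Delay0 over NVMe-oF, and the abort example then drives it: subsystem cnode0 allows any host (-a), gets Delay0 as its namespace plus a TCP listener on 10.0.0.2:4420 (and a discovery listener), and the example runs on one core at queue depth 128 for one second, issuing aborts against the delayed in-flight I/O. A condensed equivalent of the traced sequence (paths relative to the spdk checkout):

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128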
00:06:26.944 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37724 00:06:26.944 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37785, failed to submit 62 00:06:26.944 success 37728, unsuccessful 57, failed 0 00:06:26.944 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:26.945 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.945 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.945 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.945 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:26.945 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:26.945 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:06:26.945 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:26.945 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:26.945 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:26.945 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:26.945 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:26.945 rmmod nvme_tcp 00:06:26.945 rmmod nvme_fabrics 00:06:26.945 rmmod nvme_keyring 00:06:26.945 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:26.945 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:26.945 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:26.945 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 3166575 ']' 00:06:26.945 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 3166575 00:06:26.945 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3166575 ']' 00:06:26.945 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3166575 00:06:26.945 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:26.945 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:26.945 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3166575 00:06:27.203 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:27.204 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:27.204 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3166575' 00:06:27.204 killing process with pid 3166575 00:06:27.204 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3166575 00:06:27.204 05:35:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3166575 00:06:27.204 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:06:27.204 05:35:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:06:27.204 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:06:27.204 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:27.204 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:06:27.204 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:06:27.204 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:06:27.204 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:27.204 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:27.204 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.204 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:27.204 05:35:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:29.740 00:06:29.740 real 0m10.612s 00:06:29.740 user 0m11.304s 00:06:29.740 sys 0m4.991s 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.740 ************************************ 00:06:29.740 END TEST nvmf_abort 00:06:29.740 ************************************ 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:29.740 ************************************ 00:06:29.740 START TEST nvmf_ns_hotplug_stress 00:06:29.740 ************************************ 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:29.740 * Looking for test storage... 
00:06:29.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:29.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.740 --rc genhtml_branch_coverage=1 00:06:29.740 --rc genhtml_function_coverage=1 00:06:29.740 --rc genhtml_legend=1 00:06:29.740 --rc geninfo_all_blocks=1 00:06:29.740 --rc geninfo_unexecuted_blocks=1 00:06:29.740 00:06:29.740 ' 00:06:29.740 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:29.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.740 --rc genhtml_branch_coverage=1 00:06:29.741 --rc genhtml_function_coverage=1 00:06:29.741 --rc genhtml_legend=1 00:06:29.741 --rc geninfo_all_blocks=1 00:06:29.741 --rc geninfo_unexecuted_blocks=1 00:06:29.741 00:06:29.741 ' 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:29.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.741 --rc genhtml_branch_coverage=1 00:06:29.741 --rc genhtml_function_coverage=1 00:06:29.741 --rc genhtml_legend=1 00:06:29.741 --rc geninfo_all_blocks=1 00:06:29.741 --rc geninfo_unexecuted_blocks=1 00:06:29.741 00:06:29.741 ' 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:29.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.741 --rc genhtml_branch_coverage=1 00:06:29.741 --rc genhtml_function_coverage=1 00:06:29.741 --rc genhtml_legend=1 00:06:29.741 --rc geninfo_all_blocks=1 00:06:29.741 --rc geninfo_unexecuted_blocks=1 00:06:29.741 00:06:29.741 ' 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:29.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:29.741 05:35:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:35.016 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:35.016 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:35.016 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:35.016 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:35.016 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:35.016 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:35.016 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:35.016 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:35.016 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:35.017 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.017 05:35:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:35.017 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:35.017 Found net devices under 0000:af:00.0: cvl_0_0 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:35.017 Found net devices under 0000:af:00.1: cvl_0_1 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:35.017 05:35:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:35.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:35.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:06:35.017 00:06:35.017 --- 10.0.0.2 ping statistics --- 00:06:35.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.017 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:35.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:35.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:06:35.017 00:06:35.017 --- 10.0.0.1 ping statistics --- 00:06:35.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.017 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:35.017 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:06:35.018 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:06:35.018 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:35.018 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:35.018 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:35.018 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:35.018 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=3170513 00:06:35.018 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 3170513 00:06:35.018 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:35.018 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3170513 ']' 00:06:35.018 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.018 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.018 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.018 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.018 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:35.018 [2024-12-16 05:35:08.795610] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:35.018 [2024-12-16 05:35:08.795652] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:35.018 [2024-12-16 05:35:08.853470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.277 [2024-12-16 05:35:08.893476] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:35.277 [2024-12-16 05:35:08.893515] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:35.277 [2024-12-16 05:35:08.893525] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:35.277 [2024-12-16 05:35:08.893533] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:35.277 [2024-12-16 05:35:08.893539] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
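Annotation: the EAL initialization notices above come from the nvmf_tgt application that nvmfappstart launches inside the target network namespace. Condensed from this trace (the Jenkins workspace prefix of the binary path is shortened here, and the backgrounding and PID capture happen inside nvmfappstart rather than being echoed verbatim, so treat this as a sketch):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!                 # 3170513 in this run
    waitforlisten "$nvmfpid"   # blocks until the app answers on /var/tmp/spdk.sock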
00:06:35.277 [2024-12-16 05:35:08.893647] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.277 [2024-12-16 05:35:08.893737] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.277 [2024-12-16 05:35:08.893739] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.277 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.277 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:35.277 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:35.277 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:35.277 05:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:35.277 05:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:35.277 05:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:35.277 05:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:35.536 [2024-12-16 05:35:09.192878] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.536 05:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:35.794 05:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:35.794 [2024-12-16 05:35:09.610472] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:35.794 05:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:36.053 05:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:36.311 Malloc0 00:06:36.311 05:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:36.570 Delay0 00:06:36.570 05:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.570 05:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:36.829 NULL1 00:06:36.829 05:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:37.087 05:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3170982 00:06:37.087 05:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:37.087 05:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:37.087 05:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.464 Read completed with error (sct=0, sc=11) 00:06:38.464 05:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.465 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.465 05:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:38.465 05:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:38.723 true 00:06:38.723 05:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:38.723 05:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.659 05:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.659 05:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:39.659 05:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:39.918 true 00:06:39.918 05:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:39.918 05:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.176 05:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
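Annotation: before the hotplug loop below starts, the trace has already built the test fixture through a series of RPC calls. Condensed from the echoed commands (here and in the sketches below, rpc.py stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path shown in the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &   # PERF_PID=3170982 in this run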
00:06:40.435 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:40.435 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:40.435 true 00:06:40.435 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:40.435 05:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.809 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.809 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:41.809 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:42.068 true 00:06:42.068 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:42.068 05:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.003 05:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.003 05:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:43.003 05:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:43.261 true 00:06:43.261 05:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:43.261 05:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.520 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.520 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:43.520 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:43.779 true 00:06:43.779 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:43.779 05:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.155 05:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.155 05:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:45.155 05:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:45.413 true 00:06:45.413 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:45.413 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.671 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.671 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:45.671 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:45.929 true 00:06:45.929 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:45.929 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.188 05:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.188 05:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:46.188 05:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:46.446 true 00:06:46.446 05:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:46.446 05:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.705 05:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.963 05:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:46.963 05:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:47.222 true 00:06:47.222 05:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:47.222 05:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.158 05:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.416 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:48.416 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:48.675 true 00:06:48.675 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:48.675 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.675 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.933 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:48.933 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:49.192 true 00:06:49.192 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:49.192 05:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.128 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.128 05:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.128 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.387 05:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 
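Annotation: the pattern traced above (kill -0 / remove_ns / add_ns / bdev_null_resize) repeats for null_size 1001, 1002, ... while the perf job is still alive. A sketch of that loop, reconstructed from the echoed script line numbers 44-50 (the real ns_hotplug_stress.sh may include additional steps between lines 46 and 49, and may arrange the loop differently):

    while kill -0 "$PERF_PID"; do                                      # line 44 in the trace
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # line 45
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # line 46
        null_size=$((null_size + 1))                                   # line 49
        rpc.py bdev_null_resize NULL1 "$null_size"                     # line 50
    done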
00:06:50.387 05:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:50.645 true 00:06:50.645 05:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:50.645 05:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.581 05:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.581 05:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:51.581 05:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:51.839 true 00:06:51.839 05:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:51.839 05:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.098 05:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.356 05:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:52.356 05:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:52.356 true 00:06:52.356 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:52.356 05:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.730 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.730 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:53.730 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:53.730 true 00:06:53.989 05:35:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:53.989 05:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.925 05:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.925 05:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:54.925 05:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:55.183 true 00:06:55.183 05:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:55.183 05:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.183 05:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.442 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:55.442 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:55.700 true 00:06:55.700 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:55.700 05:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.636 05:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.894 05:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:56.894 05:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:57.153 true 00:06:57.153 05:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:57.153 05:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.087 05:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.087 05:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:58.087 05:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:58.345 true 00:06:58.345 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:58.345 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.603 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.862 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:58.862 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:58.862 true 00:06:58.862 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:06:58.862 05:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.239 05:35:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.239 05:35:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:00.239 05:35:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:00.498 true 00:07:00.498 05:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:07:00.498 05:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.498 05:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.757 05:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:00.757 05:35:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:01.015 true 00:07:01.015 05:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:07:01.015 05:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.391 05:35:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.391 05:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:02.391 05:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:02.649 true 00:07:02.649 05:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:07:02.649 05:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.584 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.584 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:03.584 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:03.843 true 00:07:03.843 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:07:03.843 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.101 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.101 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:04.101 05:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:04.360 true 00:07:04.360 05:35:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:07:04.360 
05:35:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.737 05:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.737 05:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:05.737 05:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:05.995 true 00:07:05.995 05:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:07:05.995 05:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.930 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.930 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:06.930 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:07.189 true 00:07:07.189 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:07:07.189 05:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.448 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.448 Initializing NVMe Controllers 00:07:07.448 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:07.448 Controller IO queue size 128, less than required. 00:07:07.448 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:07.448 Controller IO queue size 128, less than required. 00:07:07.448 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:07.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:07.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:07.448 Initialization complete. Launching workers. 00:07:07.448 ======================================================== 00:07:07.448 Latency(us) 00:07:07.448 Device Information : IOPS MiB/s Average min max 00:07:07.448 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1634.33 0.80 47919.31 2621.42 1130778.60 00:07:07.448 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15947.51 7.79 8026.63 1580.22 369894.31 00:07:07.448 ======================================================== 00:07:07.448 Total : 17581.84 8.58 11734.88 1580.22 1130778.60 00:07:07.448 00:07:07.448 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:07.448 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:07.706 true 00:07:07.706 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3170982 00:07:07.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3170982) - No such process 00:07:07.706 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3170982 00:07:07.706 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.965 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:08.225 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:08.225 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:08.225 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:08.225 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.225 05:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:08.225 null0 00:07:08.225 05:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.225 05:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.225 05:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:08.484 null1 00:07:08.484 05:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.484 05:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.484 05:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:08.746 null2 00:07:08.746 05:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.746 05:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.746 05:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:08.746 null3 00:07:09.064 05:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:09.064 05:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:09.064 05:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:09.064 null4 00:07:09.064 05:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:09.064 05:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:09.064 05:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:09.366 null5 00:07:09.366 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:09.366 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:09.366 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:09.366 null6 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:09.626 null7 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
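Annotation: each of the background workers being started here runs the add_remove helper traced at script lines 14-18; it binds one namespace ID to one null bdev and hot-adds/hot-removes it ten times. Reconstructed from the echoed commands (argument order as echoed; a sketch, not the verbatim script):

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }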
00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
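Annotation: putting the surrounding trace together, once spdk_nvme_perf exits both remaining namespaces are removed, eight null bdevs null0..null7 are created with the same "100 4096" size arguments, and eight add_remove workers run in parallel; the script then waits on all of their PIDs (the wait on 3176448 3176450 ... is visible at line 66 below). A condensed sketch, with the two-loop structure inferred from the echoed line numbers 58-66:

    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create null$i 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) null$i &
        pids+=($!)
    done
    wait "${pids[@]}"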
00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.626 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:09.627 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:09.627 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:09.627 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:09.627 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:09.627 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3176448 3176450 3176451 3176453 3176455 3176457 3176459 3176461 00:07:09.627 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:09.627 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:09.627 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.627 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:09.886 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:09.886 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.886 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:09.886 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:09.886 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:09.886 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:09.886 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:09.886 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.145 05:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.405 05:35:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.405 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:10.664 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:10.664 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:10.664 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:10.664 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:10.664 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.664 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:10.664 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:10.664 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:10.923 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.924 05:35:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.924 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:11.183 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:11.183 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:11.183 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:11.183 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:11.183 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:11.183 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:11.183 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:11.183 05:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.446 05:35:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:11.446 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.706 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:11.965 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:11.965 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:11.965 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:11.965 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:11.965 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:11.965 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.965 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:11.965 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.225 05:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.485 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.744 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.744 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:07:12.744 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.744 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.744 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.744 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.744 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.744 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.003 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.003 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.003 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.003 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.003 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.003 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:13.003 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.003 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.003 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:13.003 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.003 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.003 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.003 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.004 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:07:13.004 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.004 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.004 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.004 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:13.004 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.004 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.004 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.004 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.004 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.004 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:13.263 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.263 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:13.263 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:13.263 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.263 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.263 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.263 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.263 05:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.523 05:35:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.523 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.782 
05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:13.782 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:13.782 rmmod nvme_tcp 00:07:13.782 rmmod nvme_fabrics 00:07:13.782 rmmod nvme_keyring 00:07:14.041 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:14.041 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:14.041 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:14.041 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 3170513 ']' 00:07:14.041 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 3170513 00:07:14.041 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3170513 ']' 00:07:14.041 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3170513 00:07:14.041 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:14.041 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.041 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3170513 00:07:14.041 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:14.041 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:14.041 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3170513' 00:07:14.041 killing process with pid 3170513 00:07:14.041 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3170513 00:07:14.041 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3170513 00:07:14.041 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:14.041 05:35:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:14.042 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:14.042 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:14.042 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:07:14.042 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:14.042 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:07:14.313 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:14.313 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:14.313 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.313 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:14.313 05:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.219 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:16.219 00:07:16.219 real 0m46.791s 00:07:16.219 user 3m12.625s 00:07:16.219 sys 0m14.618s 00:07:16.219 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.219 05:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:16.219 ************************************ 00:07:16.219 END TEST nvmf_ns_hotplug_stress 00:07:16.219 ************************************ 00:07:16.219 05:35:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:16.219 05:35:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:16.219 05:35:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.219 05:35:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:16.219 ************************************ 00:07:16.219 START TEST nvmf_delete_subsystem 00:07:16.219 ************************************ 00:07:16.219 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:16.478 * Looking for test storage... 
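The entries just before the END TEST banner are the shared teardown (ns_hotplug_stress.sh @68-@70 handing off to nvmftestfini in nvmf/common.sh): the kernel NVMe/TCP modules are unloaded, the nvmf_tgt reactor that served the test is killed, SPDK-tagged iptables rules are dropped, and the test network namespace and interface are cleaned up. A condensed, order-preserving paraphrase of the traced commands (not the real function body) is sketched below; nvmfpid stands for the traced pid 3170513, and the cvl_0_* names are specific to this host.

    # Paraphrase of the nvmf/common.sh teardown traced above; the real function
    # has more branches (transport checks, the 'iso' setup case, retries).
    sync                                  # flush I/O before unloading kernel modules
    modprobe -v -r nvme-tcp               # verbose output above: rmmod nvme_tcp/_fabrics/_keyring
    modprobe -v -r nvme-fabrics           # the traced "for i in {1..20}" loop retries these removals
    kill "$nvmfpid" && wait "$nvmfpid"    # killprocess: stop the nvmf_tgt reactor
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: keep only non-SPDK rules
    ip -4 addr flush cvl_0_1              # flush the test interface after remove_spdk_ns

run_test then brackets the next script with its own START TEST banner and the real/user/sys timing shown above, which is why the nvmf_delete_subsystem prologue (the test-storage lookup just above and the lcov version probe that follows) starts at this point in the log.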
00:07:16.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.478 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:16.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.479 --rc genhtml_branch_coverage=1 00:07:16.479 --rc genhtml_function_coverage=1 00:07:16.479 --rc genhtml_legend=1 00:07:16.479 --rc geninfo_all_blocks=1 00:07:16.479 --rc geninfo_unexecuted_blocks=1 00:07:16.479 00:07:16.479 ' 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:16.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.479 --rc genhtml_branch_coverage=1 00:07:16.479 --rc genhtml_function_coverage=1 00:07:16.479 --rc genhtml_legend=1 00:07:16.479 --rc geninfo_all_blocks=1 00:07:16.479 --rc geninfo_unexecuted_blocks=1 00:07:16.479 00:07:16.479 ' 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:16.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.479 --rc genhtml_branch_coverage=1 00:07:16.479 --rc genhtml_function_coverage=1 00:07:16.479 --rc genhtml_legend=1 00:07:16.479 --rc geninfo_all_blocks=1 00:07:16.479 --rc geninfo_unexecuted_blocks=1 00:07:16.479 00:07:16.479 ' 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:16.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.479 --rc genhtml_branch_coverage=1 00:07:16.479 --rc genhtml_function_coverage=1 00:07:16.479 --rc genhtml_legend=1 00:07:16.479 --rc geninfo_all_blocks=1 00:07:16.479 --rc geninfo_unexecuted_blocks=1 00:07:16.479 00:07:16.479 ' 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:16.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:16.479 05:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:23.050 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:23.051 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:23.051 05:35:55 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:23.051 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:23.051 Found net devices under 0000:af:00.0: cvl_0_0 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:23.051 Found net devices under 0000:af:00.1: cvl_0_1 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 
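The device scan above walks the PCI functions matching the e810 IDs (0x8086:0x159b) and, for each one, resolves the backing kernel interface through sysfs, which is how 0000:af:00.0 and 0000:af:00.1 end up mapped to cvl_0_0 and cvl_0_1. A quick way to reproduce that mapping by hand (a sketch relying on the same sysfs layout the script uses):

    # list the net device(s) backed by each PCI function found above
    for pci in 0000:af:00.0 0000:af:00.1; do
        echo -n "$pci -> "; ls /sys/bus/pci/devices/$pci/net
    done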
00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:23.051 05:35:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:23.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:23.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:07:23.051 00:07:23.051 --- 10.0.0.2 ping statistics --- 00:07:23.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.051 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:23.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:23.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:07:23.051 00:07:23.051 --- 10.0.0.1 ping statistics --- 00:07:23.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.051 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=3180765 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 3180765 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3180765 ']' 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:07:23.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.051 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.052 [2024-12-16 05:35:56.136133] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:23.052 [2024-12-16 05:35:56.136188] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.052 [2024-12-16 05:35:56.195971] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:23.052 [2024-12-16 05:35:56.236388] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:23.052 [2024-12-16 05:35:56.236428] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:23.052 [2024-12-16 05:35:56.236435] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:23.052 [2024-12-16 05:35:56.236441] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:23.052 [2024-12-16 05:35:56.236446] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:23.052 [2024-12-16 05:35:56.236493] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.052 [2024-12-16 05:35:56.236496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.052 [2024-12-16 05:35:56.365929] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.052 [2024-12-16 05:35:56.382140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.052 NULL1 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.052 Delay0 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3180786 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:23.052 05:35:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:23.052 [2024-12-16 05:35:56.466744] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
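With both ports discovered, the harness moves cvl_0_0 into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), keeps cvl_0_1 in the root namespace as the initiator side (10.0.0.1/24), opens port 4420 in iptables, and starts nvmf_tgt inside the namespace with -m 0x3. The rpc_cmd calls that follow are the harness wrapper around scripts/rpc.py talking to that target, so the same configuration can be replayed by hand; a sketch using the exact arguments seen in the trace (rpc.py path assumed relative to the SPDK tree, default /var/tmp/spdk.sock socket):

    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512
    $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

spdk_nvme_perf is then pointed at the listener (trtype:tcp traddr:10.0.0.2 trsvcid:4420) with -q 128 randrw I/O; the Delay0 bdev layered on NULL1 adds large artificial latencies (the 1000000 values), which keeps plenty of I/O outstanding so that the nvmf_delete_subsystem call in the next step lands while the controller is under load.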
00:07:24.956 05:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:24.956 05:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.956 05:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.956 Write completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 starting I/O failed: -6 00:07:24.956 Write completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 starting I/O failed: -6 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Write completed with error (sct=0, sc=8) 00:07:24.956 starting I/O failed: -6 00:07:24.956 Write completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Write completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 starting I/O failed: -6 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Write completed with error (sct=0, sc=8) 00:07:24.956 Write completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 starting I/O failed: -6 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 starting I/O failed: -6 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Write completed with error (sct=0, sc=8) 00:07:24.956 starting I/O failed: -6 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Write completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 starting I/O failed: -6 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 starting I/O failed: -6 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Write completed with error (sct=0, sc=8) 00:07:24.956 starting I/O failed: -6 00:07:24.956 Write completed with error (sct=0, sc=8) 00:07:24.956 [2024-12-16 05:35:58.564810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd4b4000c00 is same with the state(6) to be set 00:07:24.956 Write completed with error (sct=0, sc=8) 00:07:24.956 Write completed with error (sct=0, sc=8) 00:07:24.956 Write completed with error (sct=0, sc=8) 00:07:24.956 Write completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 starting I/O failed: -6 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 
Read completed with error (sct=0, sc=8) 00:07:24.956 Write completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Write completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Write completed with error (sct=0, sc=8) 00:07:24.956 Write completed with error (sct=0, sc=8) 00:07:24.956 Write completed with error (sct=0, sc=8) 00:07:24.956 starting I/O failed: -6 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Write completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.956 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 starting I/O failed: -6 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 starting I/O failed: -6 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 starting I/O failed: -6 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 starting I/O failed: -6 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 
Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 starting I/O failed: -6 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 starting I/O failed: -6 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 starting I/O failed: -6 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 starting I/O failed: -6 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 starting I/O failed: -6 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 starting I/O failed: -6 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 [2024-12-16 05:35:58.565415] 
nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204ced0 is same with the state(6) to be set 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:24.957 Write completed with error (sct=0, sc=8) 00:07:24.957 Read completed with error (sct=0, sc=8) 00:07:25.893 [2024-12-16 05:35:59.520030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204ab20 is same with the state(6) to be set 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 [2024-12-16 05:35:59.566689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd4b400d310 is same with the state(6) to be set 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, 
sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 [2024-12-16 05:35:59.566982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204bc50 is same with the state(6) to be set 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 [2024-12-16 05:35:59.567136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204ba70 is same with the state(6) to be set 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 
00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Write completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 Read completed with error (sct=0, sc=8) 00:07:25.894 [2024-12-16 05:35:59.567621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204d0b0 is same with the state(6) to be set 00:07:25.894 Initializing NVMe Controllers 00:07:25.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:25.894 Controller IO queue size 128, less than required. 00:07:25.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:25.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:25.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:25.894 Initialization complete. Launching workers. 00:07:25.894 ======================================================== 00:07:25.894 Latency(us) 00:07:25.894 Device Information : IOPS MiB/s Average min max 00:07:25.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 179.71 0.09 1007317.75 1296.90 2001327.79 00:07:25.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.89 0.08 877208.62 376.35 1011811.70 00:07:25.894 ======================================================== 00:07:25.894 Total : 334.60 0.16 947089.19 376.35 2001327.79 00:07:25.894 00:07:25.894 [2024-12-16 05:35:59.568167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204ab20 (9): Bad file descriptor 00:07:25.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:25.894 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.894 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:25.894 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3180786 00:07:25.894 05:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3180786 00:07:26.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3180786) - No such process 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3180786 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@650 -- # local es=0 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3180786 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3180786 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.462 [2024-12-16 05:36:00.098137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3181479 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3181479 00:07:26.462 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:26.462 [2024-12-16 05:36:00.157791] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:27.029 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:27.029 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3181479 00:07:27.029 05:36:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:27.287 05:36:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:27.287 05:36:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3181479 00:07:27.287 05:36:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:27.855 05:36:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:27.855 05:36:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3181479 00:07:27.855 05:36:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:28.421 05:36:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:28.421 05:36:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3181479 00:07:28.421 05:36:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:28.988 05:36:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:28.988 05:36:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3181479 00:07:28.989 05:36:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:29.556 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:29.556 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3181479 00:07:29.556 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:29.556 Initializing NVMe Controllers 00:07:29.557 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:29.557 Controller IO queue size 128, less than required. 00:07:29.557 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:29.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:29.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:29.557 Initialization complete. Launching workers. 
00:07:29.557 ======================================================== 00:07:29.557 Latency(us) 00:07:29.557 Device Information : IOPS MiB/s Average min max 00:07:29.557 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003531.03 1000132.76 1041337.19 00:07:29.557 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005465.21 1000200.63 1041900.26 00:07:29.557 ======================================================== 00:07:29.557 Total : 256.00 0.12 1004498.12 1000132.76 1041900.26 00:07:29.557 00:07:29.815 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:29.815 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3181479 00:07:29.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3181479) - No such process 00:07:29.815 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3181479 00:07:29.815 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:29.815 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:29.815 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:29.815 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:29.815 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:29.815 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:29.815 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:29.815 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:29.815 rmmod nvme_tcp 00:07:29.815 rmmod nvme_fabrics 00:07:30.074 rmmod nvme_keyring 00:07:30.074 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:30.074 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:30.074 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:30.074 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 3180765 ']' 00:07:30.074 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 3180765 00:07:30.074 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3180765 ']' 00:07:30.075 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3180765 00:07:30.075 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:30.075 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.075 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3180765 00:07:30.075 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.075 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:07:30.075 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3180765' 00:07:30.075 killing process with pid 3180765 00:07:30.075 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3180765 00:07:30.075 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3180765 00:07:30.334 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:30.334 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:30.334 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:30.334 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:30.334 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:07:30.334 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:30.334 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:07:30.334 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:30.334 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:30.334 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.334 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.334 05:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.238 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:32.238 00:07:32.238 real 0m15.988s 00:07:32.238 user 0m29.054s 00:07:32.238 sys 0m5.371s 00:07:32.238 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.238 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.238 ************************************ 00:07:32.238 END TEST nvmf_delete_subsystem 00:07:32.238 ************************************ 00:07:32.238 05:36:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:32.238 05:36:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:32.238 05:36:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.238 05:36:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:32.497 ************************************ 00:07:32.497 START TEST nvmf_host_management 00:07:32.497 ************************************ 00:07:32.497 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:32.497 * Looking for test storage... 
00:07:32.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.497 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:32.497 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:07:32.497 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:32.497 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:32.497 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:32.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.498 --rc genhtml_branch_coverage=1 00:07:32.498 --rc genhtml_function_coverage=1 00:07:32.498 --rc genhtml_legend=1 00:07:32.498 --rc geninfo_all_blocks=1 00:07:32.498 --rc geninfo_unexecuted_blocks=1 00:07:32.498 00:07:32.498 ' 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:32.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.498 --rc genhtml_branch_coverage=1 00:07:32.498 --rc genhtml_function_coverage=1 00:07:32.498 --rc genhtml_legend=1 00:07:32.498 --rc geninfo_all_blocks=1 00:07:32.498 --rc geninfo_unexecuted_blocks=1 00:07:32.498 00:07:32.498 ' 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:32.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.498 --rc genhtml_branch_coverage=1 00:07:32.498 --rc genhtml_function_coverage=1 00:07:32.498 --rc genhtml_legend=1 00:07:32.498 --rc geninfo_all_blocks=1 00:07:32.498 --rc geninfo_unexecuted_blocks=1 00:07:32.498 00:07:32.498 ' 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:32.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.498 --rc genhtml_branch_coverage=1 00:07:32.498 --rc genhtml_function_coverage=1 00:07:32.498 --rc genhtml_legend=1 00:07:32.498 --rc geninfo_all_blocks=1 00:07:32.498 --rc geninfo_unexecuted_blocks=1 00:07:32.498 00:07:32.498 ' 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:32.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:32.498 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:32.499 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:32.499 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:32.499 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:32.499 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:32.499 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:32.499 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:32.499 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.499 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:32.499 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.499 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:32.499 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:32.499 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:32.499 05:36:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.068 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.068 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:39.068 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:39.068 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:39.068 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:39.068 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:39.068 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:39.068 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:39.068 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:39.068 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:39.068 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:39.068 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:39.068 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:39.068 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:39.068 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:39.069 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 
-- # [[ tcp == rdma ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:39.069 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:39.069 Found net devices under 0000:af:00.0: cvl_0_0 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:39.069 Found net devices under 0000:af:00.1: 
cvl_0_1 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:39.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:07:39.069 00:07:39.069 --- 10.0.0.2 ping statistics --- 00:07:39.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.069 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:39.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:07:39.069 00:07:39.069 --- 10.0.0.1 ping statistics --- 00:07:39.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.069 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.069 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:39.070 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:39.070 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:39.070 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:39.070 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:39.070 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:39.070 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:39.070 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.070 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=3186135 00:07:39.070 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 3186135 00:07:39.070 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:39.070 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3186135 ']' 00:07:39.070 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:07:39.070 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.070 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.070 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.070 05:36:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.070 [2024-12-16 05:36:12.027190] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:39.070 [2024-12-16 05:36:12.027233] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.070 [2024-12-16 05:36:12.085942] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.070 [2024-12-16 05:36:12.125048] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.070 [2024-12-16 05:36:12.125090] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.070 [2024-12-16 05:36:12.125100] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.070 [2024-12-16 05:36:12.125107] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.070 [2024-12-16 05:36:12.125112] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
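(Editor's note, not part of the captured log: the xtrace above shows the host_management test bringing the target up via nvmfappstart inside the cvl_0_0_ns_spdk namespace and then blocking in waitforlisten until the RPC socket answers. The following is a minimal sketch of that start-and-poll pattern, assuming the standard SPDK rpc.py client and reusing the paths, core mask, and max_retries value visible in this trace; the loop body is illustrative and is not the exact common.sh implementation.)

  # Launch the target in the test netns, as seen in the trace above (paths/mask taken from the log).
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!

  # Poll until the app answers on /var/tmp/spdk.sock, mirroring
  # "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...".
  max_retries=100
  while ! /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/spdk.sock framework_wait_init >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      (( max_retries-- > 0 )) || { echo "timed out waiting for /var/tmp/spdk.sock" >&2; exit 1; }
      sleep 0.5
  done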
00:07:39.070 [2024-12-16 05:36:12.125222] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.070 [2024-12-16 05:36:12.125309] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.070 [2024-12-16 05:36:12.125422] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:39.070 [2024-12-16 05:36:12.125423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.070 [2024-12-16 05:36:12.273653] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.070 Malloc0 00:07:39.070 [2024-12-16 05:36:12.333121] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=3186176 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3186176 /var/tmp/bdevperf.sock 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3186176 ']' 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:39.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:39.070 { 00:07:39.070 "params": { 00:07:39.070 "name": "Nvme$subsystem", 00:07:39.070 "trtype": "$TEST_TRANSPORT", 00:07:39.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:39.070 "adrfam": "ipv4", 00:07:39.070 "trsvcid": "$NVMF_PORT", 00:07:39.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:39.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:39.070 "hdgst": ${hdgst:-false}, 00:07:39.070 "ddgst": ${ddgst:-false} 00:07:39.070 }, 00:07:39.070 "method": "bdev_nvme_attach_controller" 00:07:39.070 } 00:07:39.070 EOF 00:07:39.070 )") 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:39.070 "params": { 00:07:39.070 "name": "Nvme0", 00:07:39.070 "trtype": "tcp", 00:07:39.070 "traddr": "10.0.0.2", 00:07:39.070 "adrfam": "ipv4", 00:07:39.070 "trsvcid": "4420", 00:07:39.070 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:39.070 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:39.070 "hdgst": false, 00:07:39.070 "ddgst": false 00:07:39.070 }, 00:07:39.070 "method": "bdev_nvme_attach_controller" 00:07:39.070 }' 00:07:39.070 [2024-12-16 05:36:12.429991] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:39.070 [2024-12-16 05:36:12.430036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3186176 ] 00:07:39.070 [2024-12-16 05:36:12.487921] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.070 [2024-12-16 05:36:12.526658] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.070 Running I/O for 10 seconds... 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:39.070 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:39.071 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:39.071 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:39.071 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:39.071 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:39.071 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:39.071 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.071 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.071 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.071 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=92 00:07:39.071 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 92 -ge 100 ']' 00:07:39.071 05:36:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:39.331 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:39.331 
05:36:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:39.331 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:39.331 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:39.331 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.331 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.331 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.331 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:07:39.331 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:07:39.331 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:39.331 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:39.331 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:39.331 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:39.331 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.331 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.331 [2024-12-16 05:36:13.083706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.083751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.331 [2024-12-16 05:36:13.083765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.083774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.331 [2024-12-16 05:36:13.083782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.083789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.331 [2024-12-16 05:36:13.083797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.083804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.331 [2024-12-16 05:36:13.083812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.083818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:39.331 [2024-12-16 05:36:13.083826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.083832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.331 [2024-12-16 05:36:13.083840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.083854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.331 [2024-12-16 05:36:13.083863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.083874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.331 [2024-12-16 05:36:13.083882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.083889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.331 [2024-12-16 05:36:13.083897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.083903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.331 [2024-12-16 05:36:13.083911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.083918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.331 [2024-12-16 05:36:13.083926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.083932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.331 [2024-12-16 05:36:13.083940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.083947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.331 [2024-12-16 05:36:13.083955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.083962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.331 [2024-12-16 05:36:13.083970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.083976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:39.331 [2024-12-16 05:36:13.083984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.083991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.331 [2024-12-16 05:36:13.083999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.084005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.331 [2024-12-16 05:36:13.084013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.084019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.331 [2024-12-16 05:36:13.084027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.084033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.331 [2024-12-16 05:36:13.084041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.084047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.331 [2024-12-16 05:36:13.084057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.084064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.331 [2024-12-16 05:36:13.084073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.331 [2024-12-16 05:36:13.084079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:39.332 [2024-12-16 05:36:13.084129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:39.332 [2024-12-16 05:36:13.084272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 
05:36:13.084415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084556] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.332 [2024-12-16 05:36:13.084613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.332 [2024-12-16 05:36:13.084619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.333 [2024-12-16 05:36:13.084627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.333 [2024-12-16 05:36:13.084633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.333 [2024-12-16 05:36:13.084641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.333 [2024-12-16 05:36:13.084647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.333 [2024-12-16 05:36:13.084655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.333 [2024-12-16 05:36:13.084661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.333 [2024-12-16 05:36:13.084669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.333 [2024-12-16 05:36:13.084675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.333 [2024-12-16 05:36:13.084738] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1019d30 was disconnected and freed. reset controller. 
00:07:39.333 [2024-12-16 05:36:13.085625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:39.333 task offset: 101888 on job bdev=Nvme0n1 fails 00:07:39.333 00:07:39.333 Latency(us) 00:07:39.333 [2024-12-16T04:36:13.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.333 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:39.333 Job: Nvme0n1 ended in about 0.40 seconds with error 00:07:39.333 Verification LBA range: start 0x0 length 0x400 00:07:39.333 Nvme0n1 : 0.40 1896.46 118.53 158.04 0.00 30317.79 1482.36 27462.70 00:07:39.333 [2024-12-16T04:36:13.189Z] =================================================================================================================== 00:07:39.333 [2024-12-16T04:36:13.189Z] Total : 1896.46 118.53 158.04 0.00 30317.79 1482.36 27462.70 00:07:39.333 [2024-12-16 05:36:13.088090] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:39.333 [2024-12-16 05:36:13.088112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe00b50 (9): Bad file descriptor 00:07:39.333 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.333 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:39.333 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.333 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.333 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.333 05:36:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:39.333 [2024-12-16 05:36:13.139111] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:40.268 05:36:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3186176 00:07:40.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3186176) - No such process 00:07:40.268 05:36:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:40.268 05:36:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:40.268 05:36:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:40.268 05:36:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:40.268 05:36:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:07:40.268 05:36:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:07:40.268 05:36:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:40.268 05:36:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:40.268 { 00:07:40.268 "params": { 00:07:40.268 "name": "Nvme$subsystem", 00:07:40.268 "trtype": "$TEST_TRANSPORT", 00:07:40.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:40.268 "adrfam": "ipv4", 00:07:40.268 "trsvcid": "$NVMF_PORT", 00:07:40.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:40.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:40.268 "hdgst": ${hdgst:-false}, 00:07:40.268 "ddgst": ${ddgst:-false} 00:07:40.268 }, 00:07:40.268 "method": "bdev_nvme_attach_controller" 00:07:40.268 } 00:07:40.268 EOF 00:07:40.268 )") 00:07:40.268 05:36:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:07:40.268 05:36:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:07:40.268 05:36:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:07:40.268 05:36:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:40.268 "params": { 00:07:40.268 "name": "Nvme0", 00:07:40.268 "trtype": "tcp", 00:07:40.268 "traddr": "10.0.0.2", 00:07:40.268 "adrfam": "ipv4", 00:07:40.268 "trsvcid": "4420", 00:07:40.268 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:40.268 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:40.268 "hdgst": false, 00:07:40.268 "ddgst": false 00:07:40.268 }, 00:07:40.268 "method": "bdev_nvme_attach_controller" 00:07:40.268 }' 00:07:40.526 [2024-12-16 05:36:14.152179] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:40.526 [2024-12-16 05:36:14.152227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3186429 ] 00:07:40.526 [2024-12-16 05:36:14.209225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.526 [2024-12-16 05:36:14.246873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.785 Running I/O for 1 seconds... 
00:07:41.720 1984.00 IOPS, 124.00 MiB/s 00:07:41.720 Latency(us) 00:07:41.720 [2024-12-16T04:36:15.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.720 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:41.720 Verification LBA range: start 0x0 length 0x400 00:07:41.720 Nvme0n1 : 1.02 2005.29 125.33 0.00 0.00 31428.18 4743.56 27337.87 00:07:41.720 [2024-12-16T04:36:15.576Z] =================================================================================================================== 00:07:41.720 [2024-12-16T04:36:15.576Z] Total : 2005.29 125.33 0.00 0.00 31428.18 4743.56 27337.87 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:41.979 rmmod nvme_tcp 00:07:41.979 rmmod nvme_fabrics 00:07:41.979 rmmod nvme_keyring 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 3186135 ']' 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 3186135 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3186135 ']' 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3186135 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3186135 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:41.979 05:36:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3186135' 00:07:41.979 killing process with pid 3186135 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3186135 00:07:41.979 05:36:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3186135 00:07:42.238 [2024-12-16 05:36:15.984463] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:42.238 05:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:42.238 05:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:42.238 05:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:42.238 05:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:42.238 05:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:07:42.238 05:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:42.238 05:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:07:42.238 05:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:42.238 05:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:42.238 05:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.238 05:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.238 05:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.773 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:44.773 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:44.773 00:07:44.773 real 0m11.988s 00:07:44.774 user 0m19.148s 00:07:44.774 sys 0m5.404s 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.774 ************************************ 00:07:44.774 END TEST nvmf_host_management 00:07:44.774 ************************************ 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:44.774 ************************************ 00:07:44.774 START TEST nvmf_lvol 00:07:44.774 ************************************ 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:44.774 * Looking for test storage... 00:07:44.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:44.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.774 --rc genhtml_branch_coverage=1 00:07:44.774 --rc genhtml_function_coverage=1 00:07:44.774 --rc genhtml_legend=1 00:07:44.774 --rc geninfo_all_blocks=1 00:07:44.774 --rc geninfo_unexecuted_blocks=1 00:07:44.774 00:07:44.774 ' 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:44.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.774 --rc genhtml_branch_coverage=1 00:07:44.774 --rc genhtml_function_coverage=1 00:07:44.774 --rc genhtml_legend=1 00:07:44.774 --rc geninfo_all_blocks=1 00:07:44.774 --rc geninfo_unexecuted_blocks=1 00:07:44.774 00:07:44.774 ' 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:44.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.774 --rc genhtml_branch_coverage=1 00:07:44.774 --rc genhtml_function_coverage=1 00:07:44.774 --rc genhtml_legend=1 00:07:44.774 --rc geninfo_all_blocks=1 00:07:44.774 --rc geninfo_unexecuted_blocks=1 00:07:44.774 00:07:44.774 ' 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:44.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.774 --rc genhtml_branch_coverage=1 00:07:44.774 --rc genhtml_function_coverage=1 00:07:44.774 --rc genhtml_legend=1 00:07:44.774 --rc geninfo_all_blocks=1 00:07:44.774 --rc geninfo_unexecuted_blocks=1 00:07:44.774 00:07:44.774 ' 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.774 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:44.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:44.775 05:36:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:51.343 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:51.343 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:51.343 Found net devices under 0000:af:00.0: cvl_0_0 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:51.343 Found net devices under 0000:af:00.1: cvl_0_1 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.343 
05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.343 05:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:51.343 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.343 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.343 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:51.343 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.343 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:51.343 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:51.343 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:51.343 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:51.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:51.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:07:51.343 00:07:51.343 --- 10.0.0.2 ping statistics --- 00:07:51.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.343 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:07:51.343 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:51.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:51.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:07:51.343 00:07:51.343 --- 10.0.0.1 ping statistics --- 00:07:51.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.343 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:07:51.343 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.343 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:07:51.343 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=3190340 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 3190340 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3190340 ']' 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:51.344 [2024-12-16 05:36:24.365039] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:51.344 [2024-12-16 05:36:24.365082] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.344 [2024-12-16 05:36:24.423072] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.344 [2024-12-16 05:36:24.461788] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.344 [2024-12-16 05:36:24.461825] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.344 [2024-12-16 05:36:24.461833] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.344 [2024-12-16 05:36:24.461841] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.344 [2024-12-16 05:36:24.461851] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:51.344 [2024-12-16 05:36:24.461897] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.344 [2024-12-16 05:36:24.461991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.344 [2024-12-16 05:36:24.461993] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:51.344 [2024-12-16 05:36:24.752712] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:51.344 05:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:51.344 05:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:51.344 05:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:51.601 05:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:51.859 05:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=32c636f6-efd6-424a-b02f-d34428ab8512 00:07:51.859 05:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 32c636f6-efd6-424a-b02f-d34428ab8512 lvol 20 00:07:52.117 05:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=36a57fe2-f8d2-41b7-8dfb-9f7c71b6e365 00:07:52.117 05:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:52.375 05:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 36a57fe2-f8d2-41b7-8dfb-9f7c71b6e365 00:07:52.375 05:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:52.633 [2024-12-16 05:36:26.360308] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.633 05:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:52.891 05:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3190750 00:07:52.891 05:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:52.891 05:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:53.824 05:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 36a57fe2-f8d2-41b7-8dfb-9f7c71b6e365 MY_SNAPSHOT 00:07:54.082 05:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=eaa5a40d-95dd-4905-a391-259aa8652324 00:07:54.082 05:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 36a57fe2-f8d2-41b7-8dfb-9f7c71b6e365 30 00:07:54.340 05:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone eaa5a40d-95dd-4905-a391-259aa8652324 MY_CLONE 00:07:54.598 05:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bc6bbe26-847c-4c2c-abd4-aa954cd5550c 00:07:54.598 05:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bc6bbe26-847c-4c2c-abd4-aa954cd5550c 00:07:55.164 05:36:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3190750 00:08:03.278 Initializing NVMe Controllers 00:08:03.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:03.278 Controller IO queue size 128, less than required. 00:08:03.278 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:03.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:03.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:03.278 Initialization complete. Launching workers. 00:08:03.278 ======================================================== 00:08:03.278 Latency(us) 00:08:03.278 Device Information : IOPS MiB/s Average min max 00:08:03.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12183.00 47.59 10512.73 989.89 57105.90 00:08:03.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12119.30 47.34 10567.19 3290.62 61874.28 00:08:03.278 ======================================================== 00:08:03.278 Total : 24302.30 94.93 10539.89 989.89 61874.28 00:08:03.278 00:08:03.279 05:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:03.537 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 36a57fe2-f8d2-41b7-8dfb-9f7c71b6e365 00:08:03.537 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 32c636f6-efd6-424a-b02f-d34428ab8512 00:08:03.795 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:03.795 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:03.795 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:03.795 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:03.795 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:03.795 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:03.796 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:03.796 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:03.796 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:03.796 rmmod nvme_tcp 00:08:03.796 rmmod nvme_fabrics 00:08:03.796 rmmod nvme_keyring 00:08:04.055 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:04.055 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:04.055 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:04.055 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 3190340 ']' 00:08:04.055 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 3190340 00:08:04.055 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3190340 ']' 00:08:04.055 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3190340 00:08:04.055 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:04.055 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:04.055 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3190340 00:08:04.055 05:36:37 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:04.055 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:04.055 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3190340' 00:08:04.055 killing process with pid 3190340 00:08:04.055 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3190340 00:08:04.055 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3190340 00:08:04.314 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:04.314 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:04.314 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:04.314 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:04.314 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:08:04.314 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:04.314 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:08:04.314 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:04.314 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:04.314 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.314 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.314 05:36:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.218 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:06.218 00:08:06.218 real 0m21.860s 00:08:06.218 user 1m3.047s 00:08:06.218 sys 0m7.434s 00:08:06.218 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.218 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:06.218 ************************************ 00:08:06.218 END TEST nvmf_lvol 00:08:06.218 ************************************ 00:08:06.218 05:36:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:06.218 05:36:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:06.218 05:36:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.218 05:36:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:06.477 ************************************ 00:08:06.477 START TEST nvmf_lvs_grow 00:08:06.477 ************************************ 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:06.477 * Looking for test storage... 
00:08:06.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:06.477 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:06.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.478 --rc genhtml_branch_coverage=1 00:08:06.478 --rc genhtml_function_coverage=1 00:08:06.478 --rc genhtml_legend=1 00:08:06.478 --rc geninfo_all_blocks=1 00:08:06.478 --rc geninfo_unexecuted_blocks=1 00:08:06.478 00:08:06.478 ' 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:06.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.478 --rc genhtml_branch_coverage=1 00:08:06.478 --rc genhtml_function_coverage=1 00:08:06.478 --rc genhtml_legend=1 00:08:06.478 --rc geninfo_all_blocks=1 00:08:06.478 --rc geninfo_unexecuted_blocks=1 00:08:06.478 00:08:06.478 ' 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:06.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.478 --rc genhtml_branch_coverage=1 00:08:06.478 --rc genhtml_function_coverage=1 00:08:06.478 --rc genhtml_legend=1 00:08:06.478 --rc geninfo_all_blocks=1 00:08:06.478 --rc geninfo_unexecuted_blocks=1 00:08:06.478 00:08:06.478 ' 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:06.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.478 --rc genhtml_branch_coverage=1 00:08:06.478 --rc genhtml_function_coverage=1 00:08:06.478 --rc genhtml_legend=1 00:08:06.478 --rc geninfo_all_blocks=1 00:08:06.478 --rc geninfo_unexecuted_blocks=1 00:08:06.478 00:08:06.478 ' 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:06.478 05:36:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:06.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:06.478 05:36:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:11.844 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:11.844 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:11.844 Found net devices under 0000:af:00.0: cvl_0_0 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:11.844 Found net devices under 0000:af:00.1: cvl_0_1 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.844 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:11.845 
05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:11.845 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:12.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:12.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:08:12.104 00:08:12.104 --- 10.0.0.2 ping statistics --- 00:08:12.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.104 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:12.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:12.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:08:12.104 00:08:12.104 --- 10.0.0.1 ping statistics --- 00:08:12.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.104 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=3196091 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 3196091 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3196091 ']' 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.104 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:12.104 [2024-12-16 05:36:45.814500] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:12.104 [2024-12-16 05:36:45.814543] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.104 [2024-12-16 05:36:45.872898] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.104 [2024-12-16 05:36:45.911655] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.104 [2024-12-16 05:36:45.911692] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.104 [2024-12-16 05:36:45.911699] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.104 [2024-12-16 05:36:45.911705] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.104 [2024-12-16 05:36:45.911710] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.104 [2024-12-16 05:36:45.911728] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.364 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:12.364 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:12.364 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:12.364 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:12.364 05:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:12.364 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.364 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:12.364 [2024-12-16 05:36:46.201882] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.622 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:12.622 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:12.622 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.622 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:12.622 ************************************ 00:08:12.622 START TEST lvs_grow_clean 00:08:12.622 ************************************ 00:08:12.623 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:12.623 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:12.623 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:12.623 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:12.623 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:12.623 05:36:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:12.623 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:12.623 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:12.623 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:12.623 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:12.623 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:12.623 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:12.881 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=fdd19104-f8c2-4045-8ab3-d7d3db664dd8 00:08:12.881 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fdd19104-f8c2-4045-8ab3-d7d3db664dd8 00:08:12.881 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:13.140 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:13.140 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:13.140 05:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fdd19104-f8c2-4045-8ab3-d7d3db664dd8 lvol 150 00:08:13.405 05:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=bd74d9cb-cb53-43fc-9d25-dad628d054a8 00:08:13.405 05:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:13.405 05:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:13.405 [2024-12-16 05:36:47.221747] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:13.405 [2024-12-16 05:36:47.221794] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:13.405 true 00:08:13.405 05:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
fdd19104-f8c2-4045-8ab3-d7d3db664dd8 00:08:13.405 05:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:13.667 05:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:13.667 05:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:13.926 05:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bd74d9cb-cb53-43fc-9d25-dad628d054a8 00:08:14.185 05:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:14.185 [2024-12-16 05:36:47.939897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.185 05:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:14.444 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3196428 00:08:14.444 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:14.444 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:14.444 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3196428 /var/tmp/bdevperf.sock 00:08:14.444 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3196428 ']' 00:08:14.444 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:14.444 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:14.444 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:14.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:14.444 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:14.444 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:14.444 [2024-12-16 05:36:48.175069] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:14.444 [2024-12-16 05:36:48.175115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3196428 ] 00:08:14.444 [2024-12-16 05:36:48.230731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.444 [2024-12-16 05:36:48.270477] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.703 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:14.703 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:14.703 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:14.962 Nvme0n1 00:08:14.962 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:15.221 [ 00:08:15.221 { 00:08:15.221 "name": "Nvme0n1", 00:08:15.221 "aliases": [ 00:08:15.221 "bd74d9cb-cb53-43fc-9d25-dad628d054a8" 00:08:15.221 ], 00:08:15.221 "product_name": "NVMe disk", 00:08:15.221 "block_size": 4096, 00:08:15.221 "num_blocks": 38912, 00:08:15.221 "uuid": "bd74d9cb-cb53-43fc-9d25-dad628d054a8", 00:08:15.221 "numa_id": 1, 00:08:15.221 "assigned_rate_limits": { 00:08:15.221 "rw_ios_per_sec": 0, 00:08:15.221 "rw_mbytes_per_sec": 0, 00:08:15.221 "r_mbytes_per_sec": 0, 00:08:15.221 "w_mbytes_per_sec": 0 00:08:15.221 }, 00:08:15.221 "claimed": false, 00:08:15.221 "zoned": false, 00:08:15.221 "supported_io_types": { 00:08:15.221 "read": true, 00:08:15.221 "write": true, 00:08:15.221 "unmap": true, 00:08:15.221 "flush": true, 00:08:15.221 "reset": true, 00:08:15.221 "nvme_admin": true, 00:08:15.221 "nvme_io": true, 00:08:15.221 "nvme_io_md": false, 00:08:15.221 "write_zeroes": true, 00:08:15.221 "zcopy": false, 00:08:15.221 "get_zone_info": false, 00:08:15.221 "zone_management": false, 00:08:15.221 "zone_append": false, 00:08:15.221 "compare": true, 00:08:15.221 "compare_and_write": true, 00:08:15.221 "abort": true, 00:08:15.221 "seek_hole": false, 00:08:15.221 "seek_data": false, 00:08:15.221 "copy": true, 00:08:15.222 "nvme_iov_md": false 00:08:15.222 }, 00:08:15.222 "memory_domains": [ 00:08:15.222 { 00:08:15.222 "dma_device_id": "system", 00:08:15.222 "dma_device_type": 1 00:08:15.222 } 00:08:15.222 ], 00:08:15.222 "driver_specific": { 00:08:15.222 "nvme": [ 00:08:15.222 { 00:08:15.222 "trid": { 00:08:15.222 "trtype": "TCP", 00:08:15.222 "adrfam": "IPv4", 00:08:15.222 "traddr": "10.0.0.2", 00:08:15.222 "trsvcid": "4420", 00:08:15.222 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:15.222 }, 00:08:15.222 "ctrlr_data": { 00:08:15.222 "cntlid": 1, 00:08:15.222 "vendor_id": "0x8086", 00:08:15.222 "model_number": "SPDK bdev Controller", 00:08:15.222 "serial_number": "SPDK0", 00:08:15.222 "firmware_revision": "24.09.1", 00:08:15.222 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:15.222 "oacs": { 00:08:15.222 "security": 0, 00:08:15.222 "format": 0, 00:08:15.222 "firmware": 0, 00:08:15.222 "ns_manage": 0 00:08:15.222 }, 00:08:15.222 "multi_ctrlr": true, 00:08:15.222 
"ana_reporting": false 00:08:15.222 }, 00:08:15.222 "vs": { 00:08:15.222 "nvme_version": "1.3" 00:08:15.222 }, 00:08:15.222 "ns_data": { 00:08:15.222 "id": 1, 00:08:15.222 "can_share": true 00:08:15.222 } 00:08:15.222 } 00:08:15.222 ], 00:08:15.222 "mp_policy": "active_passive" 00:08:15.222 } 00:08:15.222 } 00:08:15.222 ] 00:08:15.222 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3196595 00:08:15.222 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:15.222 05:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:15.222 Running I/O for 10 seconds... 00:08:16.600 Latency(us) 00:08:16.600 [2024-12-16T04:36:50.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.600 Nvme0n1 : 1.00 23308.00 91.05 0.00 0.00 0.00 0.00 0.00 00:08:16.600 [2024-12-16T04:36:50.456Z] =================================================================================================================== 00:08:16.600 [2024-12-16T04:36:50.456Z] Total : 23308.00 91.05 0.00 0.00 0.00 0.00 0.00 00:08:16.600 00:08:17.168 05:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fdd19104-f8c2-4045-8ab3-d7d3db664dd8 00:08:17.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.428 Nvme0n1 : 2.00 23211.00 90.67 0.00 0.00 0.00 0.00 0.00 00:08:17.428 [2024-12-16T04:36:51.284Z] =================================================================================================================== 00:08:17.428 [2024-12-16T04:36:51.284Z] Total : 23211.00 90.67 0.00 0.00 0.00 0.00 0.00 00:08:17.428 00:08:17.428 true 00:08:17.428 05:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fdd19104-f8c2-4045-8ab3-d7d3db664dd8 00:08:17.428 05:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:17.687 05:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:17.687 05:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:17.687 05:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3196595 00:08:18.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.254 Nvme0n1 : 3.00 23314.67 91.07 0.00 0.00 0.00 0.00 0.00 00:08:18.254 [2024-12-16T04:36:52.110Z] =================================================================================================================== 00:08:18.254 [2024-12-16T04:36:52.110Z] Total : 23314.67 91.07 0.00 0.00 0.00 0.00 0.00 00:08:18.254 00:08:19.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.191 Nvme0n1 : 4.00 23425.50 91.51 0.00 0.00 0.00 0.00 0.00 00:08:19.191 [2024-12-16T04:36:53.047Z] 
=================================================================================================================== 00:08:19.191 [2024-12-16T04:36:53.047Z] Total : 23425.50 91.51 0.00 0.00 0.00 0.00 0.00 00:08:19.191 00:08:20.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.569 Nvme0n1 : 5.00 23477.80 91.71 0.00 0.00 0.00 0.00 0.00 00:08:20.569 [2024-12-16T04:36:54.425Z] =================================================================================================================== 00:08:20.569 [2024-12-16T04:36:54.425Z] Total : 23477.80 91.71 0.00 0.00 0.00 0.00 0.00 00:08:20.569 00:08:21.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.505 Nvme0n1 : 6.00 23533.83 91.93 0.00 0.00 0.00 0.00 0.00 00:08:21.505 [2024-12-16T04:36:55.361Z] =================================================================================================================== 00:08:21.505 [2024-12-16T04:36:55.361Z] Total : 23533.83 91.93 0.00 0.00 0.00 0.00 0.00 00:08:21.505 00:08:22.441 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.441 Nvme0n1 : 7.00 23563.00 92.04 0.00 0.00 0.00 0.00 0.00 00:08:22.441 [2024-12-16T04:36:56.297Z] =================================================================================================================== 00:08:22.441 [2024-12-16T04:36:56.297Z] Total : 23563.00 92.04 0.00 0.00 0.00 0.00 0.00 00:08:22.441 00:08:23.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.378 Nvme0n1 : 8.00 23598.75 92.18 0.00 0.00 0.00 0.00 0.00 00:08:23.378 [2024-12-16T04:36:57.234Z] =================================================================================================================== 00:08:23.378 [2024-12-16T04:36:57.234Z] Total : 23598.75 92.18 0.00 0.00 0.00 0.00 0.00 00:08:23.378 00:08:24.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.314 Nvme0n1 : 9.00 23612.33 92.24 0.00 0.00 0.00 0.00 0.00 00:08:24.314 [2024-12-16T04:36:58.170Z] =================================================================================================================== 00:08:24.314 [2024-12-16T04:36:58.170Z] Total : 23612.33 92.24 0.00 0.00 0.00 0.00 0.00 00:08:24.314 00:08:25.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.251 Nvme0n1 : 10.00 23635.50 92.33 0.00 0.00 0.00 0.00 0.00 00:08:25.251 [2024-12-16T04:36:59.107Z] =================================================================================================================== 00:08:25.251 [2024-12-16T04:36:59.107Z] Total : 23635.50 92.33 0.00 0.00 0.00 0.00 0.00 00:08:25.251 00:08:25.251 00:08:25.251 Latency(us) 00:08:25.251 [2024-12-16T04:36:59.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.251 Nvme0n1 : 10.00 23637.60 92.33 0.00 0.00 5411.89 1419.95 10673.01 00:08:25.251 [2024-12-16T04:36:59.107Z] =================================================================================================================== 00:08:25.251 [2024-12-16T04:36:59.107Z] Total : 23637.60 92.33 0.00 0.00 5411.89 1419.95 10673.01 00:08:25.251 { 00:08:25.251 "results": [ 00:08:25.251 { 00:08:25.251 "job": "Nvme0n1", 00:08:25.251 "core_mask": "0x2", 00:08:25.251 "workload": "randwrite", 00:08:25.251 "status": "finished", 00:08:25.251 "queue_depth": 128, 00:08:25.251 "io_size": 4096, 00:08:25.251 
"runtime": 10.004528, 00:08:25.251 "iops": 23637.596896125433, 00:08:25.251 "mibps": 92.33436287548997, 00:08:25.251 "io_failed": 0, 00:08:25.251 "io_timeout": 0, 00:08:25.251 "avg_latency_us": 5411.892292396735, 00:08:25.251 "min_latency_us": 1419.9466666666667, 00:08:25.251 "max_latency_us": 10673.005714285715 00:08:25.251 } 00:08:25.251 ], 00:08:25.251 "core_count": 1 00:08:25.251 } 00:08:25.251 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3196428 00:08:25.251 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3196428 ']' 00:08:25.251 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3196428 00:08:25.251 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:25.251 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:25.251 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3196428 00:08:25.511 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:25.511 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:25.511 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3196428' 00:08:25.511 killing process with pid 3196428 00:08:25.511 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3196428 00:08:25.511 Received shutdown signal, test time was about 10.000000 seconds 00:08:25.511 00:08:25.511 Latency(us) 00:08:25.511 [2024-12-16T04:36:59.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.511 [2024-12-16T04:36:59.367Z] =================================================================================================================== 00:08:25.511 [2024-12-16T04:36:59.367Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:25.511 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3196428 00:08:25.511 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:25.770 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:26.028 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fdd19104-f8c2-4045-8ab3-d7d3db664dd8 00:08:26.028 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:26.287 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:26.287 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:26.287 05:36:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:26.287 [2024-12-16 05:37:00.067430] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:26.287 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fdd19104-f8c2-4045-8ab3-d7d3db664dd8 00:08:26.287 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:26.287 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fdd19104-f8c2-4045-8ab3-d7d3db664dd8 00:08:26.287 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.287 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.287 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.287 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.287 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.287 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.287 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.287 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:26.287 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fdd19104-f8c2-4045-8ab3-d7d3db664dd8 00:08:26.546 request: 00:08:26.546 { 00:08:26.546 "uuid": "fdd19104-f8c2-4045-8ab3-d7d3db664dd8", 00:08:26.546 "method": "bdev_lvol_get_lvstores", 00:08:26.546 "req_id": 1 00:08:26.546 } 00:08:26.546 Got JSON-RPC error response 00:08:26.546 response: 00:08:26.546 { 00:08:26.546 "code": -19, 00:08:26.546 "message": "No such device" 00:08:26.546 } 00:08:26.546 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:26.546 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:26.546 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:26.546 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:26.546 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:26.804 aio_bdev 00:08:26.804 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bd74d9cb-cb53-43fc-9d25-dad628d054a8 00:08:26.804 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=bd74d9cb-cb53-43fc-9d25-dad628d054a8 00:08:26.804 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:26.804 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:26.804 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:26.804 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:26.804 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:27.063 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bd74d9cb-cb53-43fc-9d25-dad628d054a8 -t 2000 00:08:27.063 [ 00:08:27.063 { 00:08:27.063 "name": "bd74d9cb-cb53-43fc-9d25-dad628d054a8", 00:08:27.063 "aliases": [ 00:08:27.063 "lvs/lvol" 00:08:27.063 ], 00:08:27.063 "product_name": "Logical Volume", 00:08:27.063 "block_size": 4096, 00:08:27.063 "num_blocks": 38912, 00:08:27.063 "uuid": "bd74d9cb-cb53-43fc-9d25-dad628d054a8", 00:08:27.063 "assigned_rate_limits": { 00:08:27.063 "rw_ios_per_sec": 0, 00:08:27.063 "rw_mbytes_per_sec": 0, 00:08:27.063 "r_mbytes_per_sec": 0, 00:08:27.063 "w_mbytes_per_sec": 0 00:08:27.063 }, 00:08:27.063 "claimed": false, 00:08:27.063 "zoned": false, 00:08:27.063 "supported_io_types": { 00:08:27.063 "read": true, 00:08:27.063 "write": true, 00:08:27.063 "unmap": true, 00:08:27.063 "flush": false, 00:08:27.063 "reset": true, 00:08:27.063 "nvme_admin": false, 00:08:27.063 "nvme_io": false, 00:08:27.063 "nvme_io_md": false, 00:08:27.063 "write_zeroes": true, 00:08:27.063 "zcopy": false, 00:08:27.063 "get_zone_info": false, 00:08:27.063 "zone_management": false, 00:08:27.063 "zone_append": false, 00:08:27.063 "compare": false, 00:08:27.063 "compare_and_write": false, 00:08:27.063 "abort": false, 00:08:27.063 "seek_hole": true, 00:08:27.063 "seek_data": true, 00:08:27.063 "copy": false, 00:08:27.063 "nvme_iov_md": false 00:08:27.063 }, 00:08:27.063 "driver_specific": { 00:08:27.063 "lvol": { 00:08:27.063 "lvol_store_uuid": "fdd19104-f8c2-4045-8ab3-d7d3db664dd8", 00:08:27.063 "base_bdev": "aio_bdev", 00:08:27.063 "thin_provision": false, 00:08:27.063 "num_allocated_clusters": 38, 00:08:27.063 "snapshot": false, 00:08:27.063 "clone": false, 00:08:27.063 "esnap_clone": false 00:08:27.063 } 00:08:27.063 } 00:08:27.063 } 00:08:27.063 ] 00:08:27.063 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:27.063 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fdd19104-f8c2-4045-8ab3-d7d3db664dd8 00:08:27.063 
05:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:27.322 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:27.322 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:27.322 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fdd19104-f8c2-4045-8ab3-d7d3db664dd8 00:08:27.582 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:27.582 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bd74d9cb-cb53-43fc-9d25-dad628d054a8 00:08:27.840 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fdd19104-f8c2-4045-8ab3-d7d3db664dd8 00:08:27.841 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:28.100 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:28.100 00:08:28.100 real 0m15.621s 00:08:28.100 user 0m15.199s 00:08:28.100 sys 0m1.440s 00:08:28.100 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.100 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:28.100 ************************************ 00:08:28.100 END TEST lvs_grow_clean 00:08:28.100 ************************************ 00:08:28.100 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:28.100 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:28.100 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.100 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:28.100 ************************************ 00:08:28.100 START TEST lvs_grow_dirty 00:08:28.100 ************************************ 00:08:28.100 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:28.100 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:28.100 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:28.100 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:28.100 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:28.100 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:28.100 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:28.100 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:28.358 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:28.359 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:28.359 05:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:28.359 05:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:28.617 05:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2554edc9-f352-47fc-b737-60a8867dc0f0 00:08:28.617 05:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2554edc9-f352-47fc-b737-60a8867dc0f0 00:08:28.617 05:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:28.875 05:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:28.875 05:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:28.875 05:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2554edc9-f352-47fc-b737-60a8867dc0f0 lvol 150 00:08:29.134 05:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=cf3961d3-c835-4ec6-bc80-2ce249de2ec1 00:08:29.134 05:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:29.134 05:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:29.134 [2024-12-16 05:37:02.900684] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:29.134 [2024-12-16 05:37:02.900728] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:29.134 true 00:08:29.134 05:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2554edc9-f352-47fc-b737-60a8867dc0f0 00:08:29.134 05:37:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:29.391 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:29.391 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:29.650 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cf3961d3-c835-4ec6-bc80-2ce249de2ec1 00:08:29.650 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:29.909 [2024-12-16 05:37:03.670966] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.909 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:30.168 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:30.168 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3199118 00:08:30.168 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:30.168 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3199118 /var/tmp/bdevperf.sock 00:08:30.168 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3199118 ']' 00:08:30.168 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:30.168 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.168 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:30.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:30.168 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.168 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:30.168 [2024-12-16 05:37:03.897690] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:30.168 [2024-12-16 05:37:03.897731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3199118 ] 00:08:30.168 [2024-12-16 05:37:03.950369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.168 [2024-12-16 05:37:03.988538] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.427 05:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:30.427 05:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:30.427 05:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:30.686 Nvme0n1 00:08:30.686 05:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:30.686 [ 00:08:30.686 { 00:08:30.686 "name": "Nvme0n1", 00:08:30.686 "aliases": [ 00:08:30.686 "cf3961d3-c835-4ec6-bc80-2ce249de2ec1" 00:08:30.686 ], 00:08:30.686 "product_name": "NVMe disk", 00:08:30.686 "block_size": 4096, 00:08:30.686 "num_blocks": 38912, 00:08:30.686 "uuid": "cf3961d3-c835-4ec6-bc80-2ce249de2ec1", 00:08:30.686 "numa_id": 1, 00:08:30.686 "assigned_rate_limits": { 00:08:30.686 "rw_ios_per_sec": 0, 00:08:30.686 "rw_mbytes_per_sec": 0, 00:08:30.686 "r_mbytes_per_sec": 0, 00:08:30.686 "w_mbytes_per_sec": 0 00:08:30.686 }, 00:08:30.686 "claimed": false, 00:08:30.686 "zoned": false, 00:08:30.686 "supported_io_types": { 00:08:30.686 "read": true, 00:08:30.686 "write": true, 00:08:30.686 "unmap": true, 00:08:30.686 "flush": true, 00:08:30.686 "reset": true, 00:08:30.686 "nvme_admin": true, 00:08:30.686 "nvme_io": true, 00:08:30.686 "nvme_io_md": false, 00:08:30.686 "write_zeroes": true, 00:08:30.686 "zcopy": false, 00:08:30.686 "get_zone_info": false, 00:08:30.686 "zone_management": false, 00:08:30.686 "zone_append": false, 00:08:30.686 "compare": true, 00:08:30.686 "compare_and_write": true, 00:08:30.686 "abort": true, 00:08:30.686 "seek_hole": false, 00:08:30.686 "seek_data": false, 00:08:30.686 "copy": true, 00:08:30.686 "nvme_iov_md": false 00:08:30.686 }, 00:08:30.686 "memory_domains": [ 00:08:30.686 { 00:08:30.686 "dma_device_id": "system", 00:08:30.686 "dma_device_type": 1 00:08:30.686 } 00:08:30.686 ], 00:08:30.686 "driver_specific": { 00:08:30.686 "nvme": [ 00:08:30.686 { 00:08:30.686 "trid": { 00:08:30.686 "trtype": "TCP", 00:08:30.686 "adrfam": "IPv4", 00:08:30.686 "traddr": "10.0.0.2", 00:08:30.686 "trsvcid": "4420", 00:08:30.686 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:30.686 }, 00:08:30.686 "ctrlr_data": { 00:08:30.686 "cntlid": 1, 00:08:30.686 "vendor_id": "0x8086", 00:08:30.686 "model_number": "SPDK bdev Controller", 00:08:30.686 "serial_number": "SPDK0", 00:08:30.686 "firmware_revision": "24.09.1", 00:08:30.686 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:30.686 "oacs": { 00:08:30.686 "security": 0, 00:08:30.686 "format": 0, 00:08:30.686 "firmware": 0, 00:08:30.686 "ns_manage": 0 00:08:30.686 }, 00:08:30.686 "multi_ctrlr": true, 00:08:30.686 
"ana_reporting": false 00:08:30.686 }, 00:08:30.686 "vs": { 00:08:30.686 "nvme_version": "1.3" 00:08:30.686 }, 00:08:30.686 "ns_data": { 00:08:30.686 "id": 1, 00:08:30.686 "can_share": true 00:08:30.686 } 00:08:30.686 } 00:08:30.686 ], 00:08:30.686 "mp_policy": "active_passive" 00:08:30.686 } 00:08:30.686 } 00:08:30.686 ] 00:08:30.686 05:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3199235 00:08:30.686 05:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:30.686 05:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:30.945 Running I/O for 10 seconds... 00:08:31.881 Latency(us) 00:08:31.881 [2024-12-16T04:37:05.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.881 Nvme0n1 : 1.00 23289.00 90.97 0.00 0.00 0.00 0.00 0.00 00:08:31.881 [2024-12-16T04:37:05.737Z] =================================================================================================================== 00:08:31.881 [2024-12-16T04:37:05.737Z] Total : 23289.00 90.97 0.00 0.00 0.00 0.00 0.00 00:08:31.881 00:08:32.818 05:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2554edc9-f352-47fc-b737-60a8867dc0f0 00:08:32.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.818 Nvme0n1 : 2.00 23402.50 91.42 0.00 0.00 0.00 0.00 0.00 00:08:32.818 [2024-12-16T04:37:06.674Z] =================================================================================================================== 00:08:32.818 [2024-12-16T04:37:06.674Z] Total : 23402.50 91.42 0.00 0.00 0.00 0.00 0.00 00:08:32.818 00:08:33.077 true 00:08:33.077 05:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2554edc9-f352-47fc-b737-60a8867dc0f0 00:08:33.077 05:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:33.336 05:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:33.336 05:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:33.336 05:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3199235 00:08:33.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.904 Nvme0n1 : 3.00 23497.33 91.79 0.00 0.00 0.00 0.00 0.00 00:08:33.904 [2024-12-16T04:37:07.760Z] =================================================================================================================== 00:08:33.904 [2024-12-16T04:37:07.760Z] Total : 23497.33 91.79 0.00 0.00 0.00 0.00 0.00 00:08:33.904 00:08:34.840 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.840 Nvme0n1 : 4.00 23491.00 91.76 0.00 0.00 0.00 0.00 0.00 00:08:34.840 [2024-12-16T04:37:08.696Z] 
=================================================================================================================== 00:08:34.840 [2024-12-16T04:37:08.696Z] Total : 23491.00 91.76 0.00 0.00 0.00 0.00 0.00 00:08:34.840 00:08:35.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.777 Nvme0n1 : 5.00 23561.80 92.04 0.00 0.00 0.00 0.00 0.00 00:08:35.777 [2024-12-16T04:37:09.633Z] =================================================================================================================== 00:08:35.777 [2024-12-16T04:37:09.633Z] Total : 23561.80 92.04 0.00 0.00 0.00 0.00 0.00 00:08:35.777 00:08:37.155 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.155 Nvme0n1 : 6.00 23553.00 92.00 0.00 0.00 0.00 0.00 0.00 00:08:37.155 [2024-12-16T04:37:11.011Z] =================================================================================================================== 00:08:37.155 [2024-12-16T04:37:11.011Z] Total : 23553.00 92.00 0.00 0.00 0.00 0.00 0.00 00:08:37.155 00:08:38.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.092 Nvme0n1 : 7.00 23573.57 92.08 0.00 0.00 0.00 0.00 0.00 00:08:38.092 [2024-12-16T04:37:11.948Z] =================================================================================================================== 00:08:38.092 [2024-12-16T04:37:11.948Z] Total : 23573.57 92.08 0.00 0.00 0.00 0.00 0.00 00:08:38.092 00:08:39.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.028 Nvme0n1 : 8.00 23607.00 92.21 0.00 0.00 0.00 0.00 0.00 00:08:39.028 [2024-12-16T04:37:12.884Z] =================================================================================================================== 00:08:39.028 [2024-12-16T04:37:12.884Z] Total : 23607.00 92.21 0.00 0.00 0.00 0.00 0.00 00:08:39.028 00:08:39.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.965 Nvme0n1 : 9.00 23635.44 92.33 0.00 0.00 0.00 0.00 0.00 00:08:39.965 [2024-12-16T04:37:13.821Z] =================================================================================================================== 00:08:39.965 [2024-12-16T04:37:13.821Z] Total : 23635.44 92.33 0.00 0.00 0.00 0.00 0.00 00:08:39.965 00:08:40.902 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.902 Nvme0n1 : 10.00 23660.40 92.42 0.00 0.00 0.00 0.00 0.00 00:08:40.902 [2024-12-16T04:37:14.758Z] =================================================================================================================== 00:08:40.902 [2024-12-16T04:37:14.758Z] Total : 23660.40 92.42 0.00 0.00 0.00 0.00 0.00 00:08:40.902 00:08:40.902 00:08:40.902 Latency(us) 00:08:40.902 [2024-12-16T04:37:14.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.902 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.902 Nvme0n1 : 10.01 23660.44 92.42 0.00 0.00 5406.44 3214.38 13481.69 00:08:40.902 [2024-12-16T04:37:14.758Z] =================================================================================================================== 00:08:40.902 [2024-12-16T04:37:14.758Z] Total : 23660.44 92.42 0.00 0.00 5406.44 3214.38 13481.69 00:08:40.902 { 00:08:40.902 "results": [ 00:08:40.902 { 00:08:40.902 "job": "Nvme0n1", 00:08:40.902 "core_mask": "0x2", 00:08:40.902 "workload": "randwrite", 00:08:40.902 "status": "finished", 00:08:40.902 "queue_depth": 128, 00:08:40.902 "io_size": 4096, 00:08:40.902 
"runtime": 10.005394, 00:08:40.902 "iops": 23660.437559980146, 00:08:40.902 "mibps": 92.42358421867245, 00:08:40.902 "io_failed": 0, 00:08:40.902 "io_timeout": 0, 00:08:40.902 "avg_latency_us": 5406.443097832952, 00:08:40.902 "min_latency_us": 3214.384761904762, 00:08:40.902 "max_latency_us": 13481.691428571428 00:08:40.902 } 00:08:40.902 ], 00:08:40.902 "core_count": 1 00:08:40.902 } 00:08:40.902 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3199118 00:08:40.902 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3199118 ']' 00:08:40.902 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3199118 00:08:40.902 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:40.902 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:40.902 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3199118 00:08:40.902 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:40.902 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:40.902 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3199118' 00:08:40.902 killing process with pid 3199118 00:08:40.902 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3199118 00:08:40.902 Received shutdown signal, test time was about 10.000000 seconds 00:08:40.902 00:08:40.903 Latency(us) 00:08:40.903 [2024-12-16T04:37:14.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.903 [2024-12-16T04:37:14.759Z] =================================================================================================================== 00:08:40.903 [2024-12-16T04:37:14.759Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:40.903 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3199118 00:08:41.161 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:41.421 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:41.680 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2554edc9-f352-47fc-b737-60a8867dc0f0 00:08:41.680 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:41.680 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:41.680 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:41.680 05:37:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3196091 00:08:41.680 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3196091 00:08:41.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3196091 Killed "${NVMF_APP[@]}" "$@" 00:08:41.680 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:41.680 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:41.680 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:41.680 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:41.680 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:41.939 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=3201143 00:08:41.939 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 3201143 00:08:41.939 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:41.939 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3201143 ']' 00:08:41.939 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.939 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.939 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.939 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.939 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:41.939 [2024-12-16 05:37:15.587669] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:41.939 [2024-12-16 05:37:15.587713] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.939 [2024-12-16 05:37:15.645435] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.939 [2024-12-16 05:37:15.684664] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.939 [2024-12-16 05:37:15.684700] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.939 [2024-12-16 05:37:15.684707] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.939 [2024-12-16 05:37:15.684713] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:41.939 [2024-12-16 05:37:15.684718] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.939 [2024-12-16 05:37:15.684735] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.939 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.939 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:41.939 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:41.939 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:41.939 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:42.198 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.198 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:42.198 [2024-12-16 05:37:15.976042] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:42.199 [2024-12-16 05:37:15.976168] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:42.199 [2024-12-16 05:37:15.976195] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:42.199 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:42.199 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev cf3961d3-c835-4ec6-bc80-2ce249de2ec1 00:08:42.199 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=cf3961d3-c835-4ec6-bc80-2ce249de2ec1 00:08:42.199 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:42.199 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:42.199 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:42.199 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:42.199 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:42.457 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cf3961d3-c835-4ec6-bc80-2ce249de2ec1 -t 2000 00:08:42.717 [ 00:08:42.717 { 00:08:42.717 "name": "cf3961d3-c835-4ec6-bc80-2ce249de2ec1", 00:08:42.717 "aliases": [ 00:08:42.717 "lvs/lvol" 00:08:42.717 ], 00:08:42.717 "product_name": "Logical Volume", 00:08:42.717 "block_size": 4096, 00:08:42.717 "num_blocks": 38912, 00:08:42.717 "uuid": "cf3961d3-c835-4ec6-bc80-2ce249de2ec1", 00:08:42.717 "assigned_rate_limits": { 00:08:42.717 "rw_ios_per_sec": 0, 00:08:42.717 "rw_mbytes_per_sec": 0, 
00:08:42.717 "r_mbytes_per_sec": 0, 00:08:42.717 "w_mbytes_per_sec": 0 00:08:42.717 }, 00:08:42.717 "claimed": false, 00:08:42.717 "zoned": false, 00:08:42.717 "supported_io_types": { 00:08:42.717 "read": true, 00:08:42.717 "write": true, 00:08:42.717 "unmap": true, 00:08:42.717 "flush": false, 00:08:42.717 "reset": true, 00:08:42.717 "nvme_admin": false, 00:08:42.717 "nvme_io": false, 00:08:42.717 "nvme_io_md": false, 00:08:42.717 "write_zeroes": true, 00:08:42.717 "zcopy": false, 00:08:42.717 "get_zone_info": false, 00:08:42.717 "zone_management": false, 00:08:42.717 "zone_append": false, 00:08:42.717 "compare": false, 00:08:42.717 "compare_and_write": false, 00:08:42.717 "abort": false, 00:08:42.717 "seek_hole": true, 00:08:42.717 "seek_data": true, 00:08:42.717 "copy": false, 00:08:42.717 "nvme_iov_md": false 00:08:42.717 }, 00:08:42.717 "driver_specific": { 00:08:42.717 "lvol": { 00:08:42.717 "lvol_store_uuid": "2554edc9-f352-47fc-b737-60a8867dc0f0", 00:08:42.717 "base_bdev": "aio_bdev", 00:08:42.717 "thin_provision": false, 00:08:42.717 "num_allocated_clusters": 38, 00:08:42.717 "snapshot": false, 00:08:42.717 "clone": false, 00:08:42.717 "esnap_clone": false 00:08:42.717 } 00:08:42.717 } 00:08:42.717 } 00:08:42.717 ] 00:08:42.717 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:42.717 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2554edc9-f352-47fc-b737-60a8867dc0f0 00:08:42.717 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:42.717 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:42.717 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2554edc9-f352-47fc-b737-60a8867dc0f0 00:08:42.717 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:42.976 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:42.976 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:43.236 [2024-12-16 05:37:16.925120] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:43.236 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2554edc9-f352-47fc-b737-60a8867dc0f0 00:08:43.236 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:43.236 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2554edc9-f352-47fc-b737-60a8867dc0f0 00:08:43.236 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.236 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.236 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.236 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.236 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.236 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.236 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.236 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:43.236 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2554edc9-f352-47fc-b737-60a8867dc0f0 00:08:43.495 request: 00:08:43.495 { 00:08:43.495 "uuid": "2554edc9-f352-47fc-b737-60a8867dc0f0", 00:08:43.495 "method": "bdev_lvol_get_lvstores", 00:08:43.495 "req_id": 1 00:08:43.495 } 00:08:43.495 Got JSON-RPC error response 00:08:43.495 response: 00:08:43.495 { 00:08:43.495 "code": -19, 00:08:43.495 "message": "No such device" 00:08:43.495 } 00:08:43.495 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:43.495 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:43.495 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:43.495 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:43.495 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:43.495 aio_bdev 00:08:43.495 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev cf3961d3-c835-4ec6-bc80-2ce249de2ec1 00:08:43.495 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=cf3961d3-c835-4ec6-bc80-2ce249de2ec1 00:08:43.495 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:43.495 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:43.495 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:43.495 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:43.495 05:37:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:43.754 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cf3961d3-c835-4ec6-bc80-2ce249de2ec1 -t 2000 00:08:44.013 [ 00:08:44.013 { 00:08:44.013 "name": "cf3961d3-c835-4ec6-bc80-2ce249de2ec1", 00:08:44.013 "aliases": [ 00:08:44.013 "lvs/lvol" 00:08:44.013 ], 00:08:44.013 "product_name": "Logical Volume", 00:08:44.013 "block_size": 4096, 00:08:44.013 "num_blocks": 38912, 00:08:44.013 "uuid": "cf3961d3-c835-4ec6-bc80-2ce249de2ec1", 00:08:44.013 "assigned_rate_limits": { 00:08:44.013 "rw_ios_per_sec": 0, 00:08:44.013 "rw_mbytes_per_sec": 0, 00:08:44.013 "r_mbytes_per_sec": 0, 00:08:44.013 "w_mbytes_per_sec": 0 00:08:44.013 }, 00:08:44.013 "claimed": false, 00:08:44.013 "zoned": false, 00:08:44.013 "supported_io_types": { 00:08:44.013 "read": true, 00:08:44.013 "write": true, 00:08:44.013 "unmap": true, 00:08:44.013 "flush": false, 00:08:44.013 "reset": true, 00:08:44.013 "nvme_admin": false, 00:08:44.013 "nvme_io": false, 00:08:44.013 "nvme_io_md": false, 00:08:44.013 "write_zeroes": true, 00:08:44.013 "zcopy": false, 00:08:44.013 "get_zone_info": false, 00:08:44.013 "zone_management": false, 00:08:44.013 "zone_append": false, 00:08:44.013 "compare": false, 00:08:44.013 "compare_and_write": false, 00:08:44.013 "abort": false, 00:08:44.013 "seek_hole": true, 00:08:44.013 "seek_data": true, 00:08:44.013 "copy": false, 00:08:44.013 "nvme_iov_md": false 00:08:44.013 }, 00:08:44.013 "driver_specific": { 00:08:44.013 "lvol": { 00:08:44.013 "lvol_store_uuid": "2554edc9-f352-47fc-b737-60a8867dc0f0", 00:08:44.013 "base_bdev": "aio_bdev", 00:08:44.013 "thin_provision": false, 00:08:44.013 "num_allocated_clusters": 38, 00:08:44.013 "snapshot": false, 00:08:44.013 "clone": false, 00:08:44.013 "esnap_clone": false 00:08:44.013 } 00:08:44.013 } 00:08:44.013 } 00:08:44.013 ] 00:08:44.013 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:44.013 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2554edc9-f352-47fc-b737-60a8867dc0f0 00:08:44.013 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:44.272 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:44.272 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2554edc9-f352-47fc-b737-60a8867dc0f0 00:08:44.272 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:44.272 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:44.272 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cf3961d3-c835-4ec6-bc80-2ce249de2ec1 00:08:44.531 05:37:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2554edc9-f352-47fc-b737-60a8867dc0f0 00:08:44.789 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:44.789 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:45.049 00:08:45.049 real 0m16.698s 00:08:45.049 user 0m43.450s 00:08:45.049 sys 0m3.701s 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:45.049 ************************************ 00:08:45.049 END TEST lvs_grow_dirty 00:08:45.049 ************************************ 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:45.049 nvmf_trace.0 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:45.049 rmmod nvme_tcp 00:08:45.049 rmmod nvme_fabrics 00:08:45.049 rmmod nvme_keyring 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:45.049 
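Cleanup and trace capture, as traced above, condensed into a sketch (shm id 0 assumed; $output_dir is illustrative and carries on the variables from the previous sketch):

  rpc.py bdev_lvol_delete "$lvol_uuid"
  rpc.py bdev_lvol_delete_lvstore -u "$lvs_uuid"
  rpc.py bdev_aio_delete aio_bdev
  rm -f "$aio_file"

  shm_file=$(find /dev/shm -name '*.0' -printf '%f\n')      # e.g. nvmf_trace.0
  tar -C /dev/shm -czf "$output_dir/${shm_file}_shm.tar.gz" "$shm_file"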
05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 3201143 ']' 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 3201143 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3201143 ']' 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3201143 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3201143 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3201143' 00:08:45.049 killing process with pid 3201143 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3201143 00:08:45.049 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3201143 00:08:45.308 05:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:45.308 05:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:45.309 05:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:45.309 05:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:45.309 05:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:08:45.309 05:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:45.309 05:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:08:45.309 05:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:45.309 05:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:45.309 05:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.309 05:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.309 05:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.845 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:47.845 00:08:47.845 real 0m41.004s 00:08:47.845 user 1m4.043s 00:08:47.845 sys 0m9.639s 00:08:47.845 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.845 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:47.845 ************************************ 00:08:47.845 END TEST nvmf_lvs_grow 00:08:47.845 ************************************ 00:08:47.845 05:37:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:47.845 05:37:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:47.845 05:37:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.845 05:37:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.845 ************************************ 00:08:47.845 START TEST nvmf_bdev_io_wait 00:08:47.845 ************************************ 00:08:47.845 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:47.845 * Looking for test storage... 00:08:47.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.845 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:47.845 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:47.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.846 --rc genhtml_branch_coverage=1 00:08:47.846 --rc genhtml_function_coverage=1 00:08:47.846 --rc genhtml_legend=1 00:08:47.846 --rc geninfo_all_blocks=1 00:08:47.846 --rc geninfo_unexecuted_blocks=1 00:08:47.846 00:08:47.846 ' 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:47.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.846 --rc genhtml_branch_coverage=1 00:08:47.846 --rc genhtml_function_coverage=1 00:08:47.846 --rc genhtml_legend=1 00:08:47.846 --rc geninfo_all_blocks=1 00:08:47.846 --rc geninfo_unexecuted_blocks=1 00:08:47.846 00:08:47.846 ' 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:47.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.846 --rc genhtml_branch_coverage=1 00:08:47.846 --rc genhtml_function_coverage=1 00:08:47.846 --rc genhtml_legend=1 00:08:47.846 --rc geninfo_all_blocks=1 00:08:47.846 --rc geninfo_unexecuted_blocks=1 00:08:47.846 00:08:47.846 ' 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:47.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.846 --rc genhtml_branch_coverage=1 00:08:47.846 --rc genhtml_function_coverage=1 00:08:47.846 --rc genhtml_legend=1 00:08:47.846 --rc geninfo_all_blocks=1 00:08:47.846 --rc geninfo_unexecuted_blocks=1 00:08:47.846 00:08:47.846 ' 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.846 05:37:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:47.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:47.846 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.847 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:47.847 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:47.847 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:47.847 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.847 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.847 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.847 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:47.847 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:47.847 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:47.847 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:53.121 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:53.121 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:53.122 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:53.122 Found net devices under 0000:af:00.0: cvl_0_0 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:53.122 Found net devices under 0000:af:00.1: cvl_0_1 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.122 05:37:26 
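The nvmf_tcp_init steps that follow move one E810 port (cvl_0_0) into a target namespace, keep the peer port (cvl_0_1) on the initiator side, and verify reachability in both directions. A condensed sketch of that plumbing, using the same device names and addresses as the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> initiator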
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:53.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:53.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:08:53.122 00:08:53.122 --- 10.0.0.2 ping statistics --- 00:08:53.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.122 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:53.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:53.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:08:53.122 00:08:53.122 --- 10.0.0.1 ping statistics --- 00:08:53.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.122 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=3205132 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 3205132 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3205132 ']' 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:53.122 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.122 [2024-12-16 05:37:26.857917] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
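Around this point the target is brought up with --wait-for-rpc so that bdev options can be set before subsystem initialization; the TCP transport, a 64 MiB Malloc namespace and a listener are then configured over RPC. A stand-alone sketch of that sequence, assuming an SPDK build tree, rpc.py on PATH with the default RPC socket, and the namespace created above:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  # in practice, wait for /var/tmp/spdk.sock to appear before issuing RPCs (the harness uses waitforlisten)

  rpc.py bdev_set_options -p 5 -c 1                         # must land before framework_start_init
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420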
00:08:53.122 [2024-12-16 05:37:26.857958] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.122 [2024-12-16 05:37:26.916368] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:53.122 [2024-12-16 05:37:26.958144] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.122 [2024-12-16 05:37:26.958184] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.123 [2024-12-16 05:37:26.958192] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.123 [2024-12-16 05:37:26.958197] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.123 [2024-12-16 05:37:26.958202] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:53.123 [2024-12-16 05:37:26.958246] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.123 [2024-12-16 05:37:26.958331] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.123 [2024-12-16 05:37:26.958399] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.123 [2024-12-16 05:37:26.958399] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:53.382 [2024-12-16 05:37:27.114623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.382 Malloc0 00:08:53.382 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.383 [2024-12-16 05:37:27.177946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3205160 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3205162 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:53.383 { 00:08:53.383 "params": { 
00:08:53.383 "name": "Nvme$subsystem", 00:08:53.383 "trtype": "$TEST_TRANSPORT", 00:08:53.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.383 "adrfam": "ipv4", 00:08:53.383 "trsvcid": "$NVMF_PORT", 00:08:53.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.383 "hdgst": ${hdgst:-false}, 00:08:53.383 "ddgst": ${ddgst:-false} 00:08:53.383 }, 00:08:53.383 "method": "bdev_nvme_attach_controller" 00:08:53.383 } 00:08:53.383 EOF 00:08:53.383 )") 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3205164 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:53.383 { 00:08:53.383 "params": { 00:08:53.383 "name": "Nvme$subsystem", 00:08:53.383 "trtype": "$TEST_TRANSPORT", 00:08:53.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.383 "adrfam": "ipv4", 00:08:53.383 "trsvcid": "$NVMF_PORT", 00:08:53.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.383 "hdgst": ${hdgst:-false}, 00:08:53.383 "ddgst": ${ddgst:-false} 00:08:53.383 }, 00:08:53.383 "method": "bdev_nvme_attach_controller" 00:08:53.383 } 00:08:53.383 EOF 00:08:53.383 )") 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3205167 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:53.383 { 00:08:53.383 "params": { 
00:08:53.383 "name": "Nvme$subsystem", 00:08:53.383 "trtype": "$TEST_TRANSPORT", 00:08:53.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.383 "adrfam": "ipv4", 00:08:53.383 "trsvcid": "$NVMF_PORT", 00:08:53.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.383 "hdgst": ${hdgst:-false}, 00:08:53.383 "ddgst": ${ddgst:-false} 00:08:53.383 }, 00:08:53.383 "method": "bdev_nvme_attach_controller" 00:08:53.383 } 00:08:53.383 EOF 00:08:53.383 )") 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:53.383 { 00:08:53.383 "params": { 00:08:53.383 "name": "Nvme$subsystem", 00:08:53.383 "trtype": "$TEST_TRANSPORT", 00:08:53.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.383 "adrfam": "ipv4", 00:08:53.383 "trsvcid": "$NVMF_PORT", 00:08:53.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.383 "hdgst": ${hdgst:-false}, 00:08:53.383 "ddgst": ${ddgst:-false} 00:08:53.383 }, 00:08:53.383 "method": "bdev_nvme_attach_controller" 00:08:53.383 } 00:08:53.383 EOF 00:08:53.383 )") 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3205160 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:53.383 "params": { 00:08:53.383 "name": "Nvme1", 00:08:53.383 "trtype": "tcp", 00:08:53.383 "traddr": "10.0.0.2", 00:08:53.383 "adrfam": "ipv4", 00:08:53.383 "trsvcid": "4420", 00:08:53.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.383 "hdgst": false, 00:08:53.383 "ddgst": false 00:08:53.383 }, 00:08:53.383 "method": "bdev_nvme_attach_controller" 00:08:53.383 }' 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:53.383 "params": { 00:08:53.383 "name": "Nvme1", 00:08:53.383 "trtype": "tcp", 00:08:53.383 "traddr": "10.0.0.2", 00:08:53.383 "adrfam": "ipv4", 00:08:53.383 "trsvcid": "4420", 00:08:53.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.383 "hdgst": false, 00:08:53.383 "ddgst": false 00:08:53.383 }, 00:08:53.383 "method": "bdev_nvme_attach_controller" 00:08:53.383 }' 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:08:53.383 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:53.383 "params": { 00:08:53.383 "name": "Nvme1", 00:08:53.383 "trtype": "tcp", 00:08:53.383 "traddr": "10.0.0.2", 00:08:53.383 "adrfam": "ipv4", 00:08:53.383 "trsvcid": "4420", 00:08:53.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.383 "hdgst": false, 00:08:53.384 "ddgst": false 00:08:53.384 }, 00:08:53.384 "method": "bdev_nvme_attach_controller" 00:08:53.384 }' 00:08:53.384 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:08:53.384 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:53.384 "params": { 00:08:53.384 "name": "Nvme1", 00:08:53.384 "trtype": "tcp", 00:08:53.384 "traddr": "10.0.0.2", 00:08:53.384 "adrfam": "ipv4", 00:08:53.384 "trsvcid": "4420", 00:08:53.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.384 "hdgst": false, 00:08:53.384 "ddgst": false 00:08:53.384 }, 00:08:53.384 "method": "bdev_nvme_attach_controller" 00:08:53.384 }' 00:08:53.384 [2024-12-16 05:37:27.230181] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:53.384 [2024-12-16 05:37:27.230184] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:53.384 [2024-12-16 05:37:27.230182] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:53.384 [2024-12-16 05:37:27.230232] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-12-16 05:37:27.230232] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 [2024-12-16 05:37:27.230233] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:53.384 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:53.384 --proc-type=auto ] 00:08:53.384 [2024-12-16 05:37:27.231760] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:53.384 [2024-12-16 05:37:27.231804] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:53.715 [2024-12-16 05:37:27.416677] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.715 [2024-12-16 05:37:27.446838] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:08:53.715 [2024-12-16 05:37:27.510006] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.050 [2024-12-16 05:37:27.541706] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:08:54.050 [2024-12-16 05:37:27.612355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.050 [2024-12-16 05:37:27.643924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:54.050 [2024-12-16 05:37:27.672045] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.050 [2024-12-16 05:37:27.699042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:08:54.323 Running I/O for 1 seconds... 00:08:54.323 Running I/O for 1 seconds... 00:08:54.323 Running I/O for 1 seconds... 00:08:54.582 Running I/O for 1 seconds... 00:08:55.150 13487.00 IOPS, 52.68 MiB/s 00:08:55.150 Latency(us) 00:08:55.150 [2024-12-16T04:37:29.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.150 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:55.150 Nvme1n1 : 1.01 13547.53 52.92 0.00 0.00 9418.58 2293.76 10797.84 00:08:55.150 [2024-12-16T04:37:29.006Z] =================================================================================================================== 00:08:55.150 [2024-12-16T04:37:29.006Z] Total : 13547.53 52.92 0.00 0.00 9418.58 2293.76 10797.84 00:08:55.409 10009.00 IOPS, 39.10 MiB/s 00:08:55.409 Latency(us) 00:08:55.409 [2024-12-16T04:37:29.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.409 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:55.409 Nvme1n1 : 1.01 10061.10 39.30 0.00 0.00 12671.43 6491.18 20971.52 00:08:55.409 [2024-12-16T04:37:29.265Z] =================================================================================================================== 00:08:55.409 [2024-12-16T04:37:29.265Z] Total : 10061.10 39.30 0.00 0.00 12671.43 6491.18 20971.52 00:08:55.409 10093.00 IOPS, 39.43 MiB/s 00:08:55.409 Latency(us) 00:08:55.409 [2024-12-16T04:37:29.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.409 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:55.409 Nvme1n1 : 1.01 10177.97 39.76 0.00 0.00 12541.65 3791.73 23842.62 00:08:55.409 [2024-12-16T04:37:29.265Z] =================================================================================================================== 00:08:55.409 [2024-12-16T04:37:29.265Z] Total : 10177.97 39.76 0.00 0.00 12541.65 3791.73 23842.62 00:08:55.409 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3205162 00:08:55.409 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3205164 00:08:55.409 253168.00 IOPS, 988.94 MiB/s 00:08:55.409 Latency(us) 00:08:55.409 [2024-12-16T04:37:29.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.409 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 
128, IO size: 4096) 00:08:55.409 Nvme1n1 : 1.00 252789.23 987.46 0.00 0.00 504.16 229.18 1497.97 00:08:55.409 [2024-12-16T04:37:29.265Z] =================================================================================================================== 00:08:55.409 [2024-12-16T04:37:29.265Z] Total : 252789.23 987.46 0.00 0.00 504.16 229.18 1497.97 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3205167 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:55.668 rmmod nvme_tcp 00:08:55.668 rmmod nvme_fabrics 00:08:55.668 rmmod nvme_keyring 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 3205132 ']' 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 3205132 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3205132 ']' 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3205132 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:55.668 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3205132 00:08:55.928 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:55.928 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:55.928 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3205132' 00:08:55.928 killing process with pid 3205132 
00:08:55.928 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3205132 00:08:55.928 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3205132 00:08:55.928 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:55.928 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:55.928 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:55.928 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:55.928 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:08:55.928 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:08:55.928 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:55.928 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:55.928 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:55.928 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.928 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.928 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.464 05:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:58.464 00:08:58.464 real 0m10.643s 00:08:58.464 user 0m17.896s 00:08:58.464 sys 0m6.001s 00:08:58.464 05:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.464 05:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:58.464 ************************************ 00:08:58.464 END TEST nvmf_bdev_io_wait 00:08:58.464 ************************************ 00:08:58.464 05:37:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:58.464 05:37:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:58.464 05:37:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.464 05:37:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:58.464 ************************************ 00:08:58.464 START TEST nvmf_queue_depth 00:08:58.464 ************************************ 00:08:58.464 05:37:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:58.464 * Looking for test storage... 
00:08:58.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.464 05:37:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:58.464 05:37:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:08:58.464 05:37:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.464 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:58.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.465 --rc genhtml_branch_coverage=1 00:08:58.465 --rc genhtml_function_coverage=1 00:08:58.465 --rc genhtml_legend=1 00:08:58.465 --rc geninfo_all_blocks=1 00:08:58.465 --rc geninfo_unexecuted_blocks=1 00:08:58.465 00:08:58.465 ' 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:58.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.465 --rc genhtml_branch_coverage=1 00:08:58.465 --rc genhtml_function_coverage=1 00:08:58.465 --rc genhtml_legend=1 00:08:58.465 --rc geninfo_all_blocks=1 00:08:58.465 --rc geninfo_unexecuted_blocks=1 00:08:58.465 00:08:58.465 ' 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:58.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.465 --rc genhtml_branch_coverage=1 00:08:58.465 --rc genhtml_function_coverage=1 00:08:58.465 --rc genhtml_legend=1 00:08:58.465 --rc geninfo_all_blocks=1 00:08:58.465 --rc geninfo_unexecuted_blocks=1 00:08:58.465 00:08:58.465 ' 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:58.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.465 --rc genhtml_branch_coverage=1 00:08:58.465 --rc genhtml_function_coverage=1 00:08:58.465 --rc genhtml_legend=1 00:08:58.465 --rc geninfo_all_blocks=1 00:08:58.465 --rc geninfo_unexecuted_blocks=1 00:08:58.465 00:08:58.465 ' 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:58.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:58.465 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.739 05:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:03.739 05:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:03.739 05:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:03.739 05:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:03.739 05:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:03.739 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:03.739 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:03.739 Found net devices under 0000:af:00.0: cvl_0_0 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:03.739 Found net devices under 0000:af:00.1: cvl_0_1 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:03.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:03.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:09:03.739 00:09:03.739 --- 10.0.0.2 ping statistics --- 00:09:03.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.739 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:03.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:03.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:09:03.739 00:09:03.739 --- 10.0.0.1 ping statistics --- 00:09:03.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.739 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:09:03.739 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:03.740 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.740 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:03.740 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:03.740 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.740 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:03.740 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:03.740 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:03.740 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:03.740 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:03.740 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.740 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=3209102 00:09:03.740 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 3209102 00:09:03.740 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:03.740 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3209102 ']' 00:09:03.740 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.740 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:03.740 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.740 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:03.740 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.740 [2024-12-16 05:37:37.444689] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:03.740 [2024-12-16 05:37:37.444732] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.740 [2024-12-16 05:37:37.506638] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.740 [2024-12-16 05:37:37.543997] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.740 [2024-12-16 05:37:37.544036] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.740 [2024-12-16 05:37:37.544046] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.740 [2024-12-16 05:37:37.544052] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.740 [2024-12-16 05:37:37.544058] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.740 [2024-12-16 05:37:37.544082] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.999 [2024-12-16 05:37:37.672291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.999 Malloc0 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.999 05:37:37 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.999 [2024-12-16 05:37:37.722938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3209124 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3209124 /var/tmp/bdevperf.sock 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3209124 ']' 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:03.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:03.999 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.999 [2024-12-16 05:37:37.774881] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:03.999 [2024-12-16 05:37:37.774921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209124 ] 00:09:03.999 [2024-12-16 05:37:37.829746] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.258 [2024-12-16 05:37:37.868296] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.258 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:04.258 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:04.258 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:04.258 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.258 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:04.258 NVMe0n1 00:09:04.258 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.258 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:04.516 Running I/O for 10 seconds... 00:09:06.386 11512.00 IOPS, 44.97 MiB/s [2024-12-16T04:37:41.618Z] 11970.00 IOPS, 46.76 MiB/s [2024-12-16T04:37:42.553Z] 12225.33 IOPS, 47.76 MiB/s [2024-12-16T04:37:43.490Z] 12268.75 IOPS, 47.92 MiB/s [2024-12-16T04:37:44.425Z] 12322.80 IOPS, 48.14 MiB/s [2024-12-16T04:37:45.358Z] 12421.67 IOPS, 48.52 MiB/s [2024-12-16T04:37:46.294Z] 12446.43 IOPS, 48.62 MiB/s [2024-12-16T04:37:47.228Z] 12508.62 IOPS, 48.86 MiB/s [2024-12-16T04:37:48.605Z] 12496.11 IOPS, 48.81 MiB/s [2024-12-16T04:37:48.605Z] 12517.60 IOPS, 48.90 MiB/s 00:09:14.749 Latency(us) 00:09:14.749 [2024-12-16T04:37:48.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.750 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:14.750 Verification LBA range: start 0x0 length 0x4000 00:09:14.750 NVMe0n1 : 10.09 12490.37 48.79 0.00 0.00 81367.98 15978.30 57172.36 00:09:14.750 [2024-12-16T04:37:48.606Z] =================================================================================================================== 00:09:14.750 [2024-12-16T04:37:48.606Z] Total : 12490.37 48.79 0.00 0.00 81367.98 15978.30 57172.36 00:09:14.750 { 00:09:14.750 "results": [ 00:09:14.750 { 00:09:14.750 "job": "NVMe0n1", 00:09:14.750 "core_mask": "0x1", 00:09:14.750 "workload": "verify", 00:09:14.750 "status": "finished", 00:09:14.750 "verify_range": { 00:09:14.750 "start": 0, 00:09:14.750 "length": 16384 00:09:14.750 }, 00:09:14.750 "queue_depth": 1024, 00:09:14.750 "io_size": 4096, 00:09:14.750 "runtime": 10.092898, 00:09:14.750 "iops": 12490.36698874793, 00:09:14.750 "mibps": 48.7904960497966, 00:09:14.750 "io_failed": 0, 00:09:14.750 "io_timeout": 0, 00:09:14.750 "avg_latency_us": 81367.97835258281, 00:09:14.750 "min_latency_us": 15978.300952380952, 00:09:14.750 "max_latency_us": 57172.358095238094 00:09:14.750 } 00:09:14.750 ], 00:09:14.750 "core_count": 1 00:09:14.750 } 00:09:14.750 05:37:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3209124 00:09:14.750 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3209124 ']' 00:09:14.750 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3209124 00:09:14.750 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:14.750 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:14.750 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3209124 00:09:14.750 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:14.750 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:14.750 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3209124' 00:09:14.750 killing process with pid 3209124 00:09:14.750 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3209124 00:09:14.750 Received shutdown signal, test time was about 10.000000 seconds 00:09:14.750 00:09:14.750 Latency(us) 00:09:14.750 [2024-12-16T04:37:48.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.750 [2024-12-16T04:37:48.606Z] =================================================================================================================== 00:09:14.750 [2024-12-16T04:37:48.606Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:14.750 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3209124 00:09:14.750 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:14.750 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:14.750 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:14.750 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:14.750 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:14.750 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:14.750 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:14.750 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:14.750 rmmod nvme_tcp 00:09:14.750 rmmod nvme_fabrics 00:09:14.750 rmmod nvme_keyring 00:09:15.009 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:15.009 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:15.009 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:15.009 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 3209102 ']' 00:09:15.009 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 3209102 00:09:15.009 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3209102 ']' 00:09:15.009 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@954 -- # kill -0 3209102 00:09:15.009 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:15.009 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:15.009 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3209102 00:09:15.009 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:15.009 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:15.009 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3209102' 00:09:15.009 killing process with pid 3209102 00:09:15.009 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3209102 00:09:15.009 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3209102 00:09:15.009 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:15.009 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:15.009 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:15.009 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:15.268 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:09:15.268 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:15.268 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:09:15.268 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:15.268 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:15.268 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.268 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.268 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.172 05:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:17.172 00:09:17.172 real 0m19.060s 00:09:17.172 user 0m22.793s 00:09:17.172 sys 0m5.468s 00:09:17.172 05:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.172 05:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.172 ************************************ 00:09:17.172 END TEST nvmf_queue_depth 00:09:17.172 ************************************ 00:09:17.172 05:37:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:17.172 05:37:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:17.172 05:37:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:17.172 05:37:50 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:17.172 ************************************ 00:09:17.172 START TEST nvmf_target_multipath 00:09:17.172 ************************************ 00:09:17.172 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:17.431 * Looking for test storage... 00:09:17.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:17.431 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:17.431 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:09:17.431 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:17.431 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:17.431 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.431 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.431 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.431 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.431 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.431 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.431 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.431 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.431 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.431 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.431 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.431 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:17.431 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:17.431 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.431 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:17.431 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:17.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.432 --rc genhtml_branch_coverage=1 00:09:17.432 --rc genhtml_function_coverage=1 00:09:17.432 --rc genhtml_legend=1 00:09:17.432 --rc geninfo_all_blocks=1 00:09:17.432 --rc geninfo_unexecuted_blocks=1 00:09:17.432 00:09:17.432 ' 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:17.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.432 --rc genhtml_branch_coverage=1 00:09:17.432 --rc genhtml_function_coverage=1 00:09:17.432 --rc genhtml_legend=1 00:09:17.432 --rc geninfo_all_blocks=1 00:09:17.432 --rc geninfo_unexecuted_blocks=1 00:09:17.432 00:09:17.432 ' 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:17.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.432 --rc genhtml_branch_coverage=1 00:09:17.432 --rc genhtml_function_coverage=1 00:09:17.432 --rc genhtml_legend=1 00:09:17.432 --rc geninfo_all_blocks=1 00:09:17.432 --rc geninfo_unexecuted_blocks=1 00:09:17.432 00:09:17.432 ' 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:17.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.432 --rc genhtml_branch_coverage=1 00:09:17.432 --rc genhtml_function_coverage=1 00:09:17.432 --rc genhtml_legend=1 00:09:17.432 --rc geninfo_all_blocks=1 00:09:17.432 --rc geninfo_unexecuted_blocks=1 00:09:17.432 00:09:17.432 ' 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:17.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:17.432 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:22.705 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # 
[[ ice == unbound ]] 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:22.705 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:22.705 Found net devices under 0000:af:00.0: cvl_0_0 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.705 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:22.706 05:37:56 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:22.706 Found net devices under 0000:af:00.1: cvl_0_1 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:22.706 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:22.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:22.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:09:22.965 00:09:22.965 --- 10.0.0.2 ping statistics --- 00:09:22.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.965 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:22.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:22.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:09:22.965 00:09:22.965 --- 10.0.0.1 ping statistics --- 00:09:22.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.965 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:22.965 only one NIC for nvmf test 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
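For reference, the nvmf_tcp_init plumbing traced above reduces to roughly the following shell sequence. This is a sketch assembled only from the commands echoed in this log; the interface names cvl_0_0/cvl_0_1, the namespace name cvl_0_0_ns_spdk and the 10.0.0.x/24 addresses are simply the values this particular run picked (two ports of one E810 NIC), not fixed constants.

# move one port into a private namespace to play the target side
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# address the initiator port in the root namespace and the target port inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
# bring the links (and the namespace loopback) up
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port on the initiator interface, tagged so it can be removed on cleanup
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# confirm reachability in both directions before any target is started
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1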
00:09:22.965 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:22.965 rmmod nvme_tcp 00:09:23.224 rmmod nvme_fabrics 00:09:23.224 rmmod nvme_keyring 00:09:23.224 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:23.224 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:23.224 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:23.224 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:09:23.224 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:23.224 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:23.224 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:23.224 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:23.224 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:09:23.224 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:23.224 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:09:23.224 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:23.224 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:23.224 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.224 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.224 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # 
'[' -n '' ']' 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:25.128 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.129 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.129 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.129 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:25.129 00:09:25.129 real 0m7.951s 00:09:25.129 user 0m1.799s 00:09:25.129 sys 0m4.171s 00:09:25.129 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.129 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:25.129 ************************************ 00:09:25.129 END TEST nvmf_target_multipath 00:09:25.129 ************************************ 00:09:25.388 05:37:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:25.388 05:37:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:25.388 05:37:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.388 05:37:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.388 ************************************ 00:09:25.388 START TEST nvmf_zcopy 00:09:25.388 ************************************ 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:25.388 * Looking for test storage... 
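The nvmftestfini teardown traced just above, at the end of the multipath run, amounts to roughly the following. The iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline is inferred from the three commands echoed for iptr, and the explicit ip netns delete is an assumption, since _remove_spdk_ns runs with its trace redirected to /dev/null in this log.

# unload host-side NVMe/TCP modules (the harness retries up to 20 times under set +e)
modprobe -r nvme-tcp
modprobe -r nvme-fabrics
# drop only the firewall rules this test added, identified by their SPDK_NVMF comment tag
iptables-save | grep -v SPDK_NVMF | iptables-restore
# remove the target-side namespace (assumed body of _remove_spdk_ns) and flush the initiator address
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1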
00:09:25.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.388 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:25.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.389 --rc genhtml_branch_coverage=1 00:09:25.389 --rc genhtml_function_coverage=1 00:09:25.389 --rc genhtml_legend=1 00:09:25.389 --rc geninfo_all_blocks=1 00:09:25.389 --rc geninfo_unexecuted_blocks=1 00:09:25.389 00:09:25.389 ' 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:25.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.389 --rc genhtml_branch_coverage=1 00:09:25.389 --rc genhtml_function_coverage=1 00:09:25.389 --rc genhtml_legend=1 00:09:25.389 --rc geninfo_all_blocks=1 00:09:25.389 --rc geninfo_unexecuted_blocks=1 00:09:25.389 00:09:25.389 ' 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:25.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.389 --rc genhtml_branch_coverage=1 00:09:25.389 --rc genhtml_function_coverage=1 00:09:25.389 --rc genhtml_legend=1 00:09:25.389 --rc geninfo_all_blocks=1 00:09:25.389 --rc geninfo_unexecuted_blocks=1 00:09:25.389 00:09:25.389 ' 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:25.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.389 --rc genhtml_branch_coverage=1 00:09:25.389 --rc genhtml_function_coverage=1 00:09:25.389 --rc genhtml_legend=1 00:09:25.389 --rc geninfo_all_blocks=1 00:09:25.389 --rc geninfo_unexecuted_blocks=1 00:09:25.389 00:09:25.389 ' 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:25.389 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:25.647 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:25.647 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:30.919 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:30.919 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:30.919 
05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:30.919 Found net devices under 0000:af:00.0: cvl_0_0 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:30.919 Found net devices under 0000:af:00.1: cvl_0_1 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.919 05:38:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:30.919 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:31.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:09:31.179 00:09:31.179 --- 10.0.0.2 ping statistics --- 00:09:31.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.179 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:31.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:09:31.179 00:09:31.179 --- 10.0.0.1 ping statistics --- 00:09:31.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.179 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=3217864 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 3217864 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3217864 ']' 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:31.179 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.179 [2024-12-16 05:38:04.919287] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
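The nvmfappstart step traced here boils down to launching the target application inside the target-side namespace and then waiting for its RPC socket before any configuration is sent. A minimal sketch, assuming the spdk checkout as working directory; the poll loop is only a stand-in for the harness's waitforlisten helper, which the log shows waiting on /var/tmp/spdk.sock.

# single reactor core (-m 0x2), all trace groups enabled (-e 0xFFFF), shared-memory id 0
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# wait until the app answers on its default RPC socket
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
done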
00:09:31.179 [2024-12-16 05:38:04.919327] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.179 [2024-12-16 05:38:04.977356] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.179 [2024-12-16 05:38:05.015512] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.179 [2024-12-16 05:38:05.015550] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.179 [2024-12-16 05:38:05.015559] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.179 [2024-12-16 05:38:05.015567] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.179 [2024-12-16 05:38:05.015573] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.179 [2024-12-16 05:38:05.015598] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.440 [2024-12-16 05:38:05.148813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.440 [2024-12-16 05:38:05.173023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.440 malloc0 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.440 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:31.441 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.441 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.441 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.441 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:31.441 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:31.441 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:09:31.441 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:09:31.441 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:31.441 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:31.441 { 00:09:31.441 "params": { 00:09:31.441 "name": "Nvme$subsystem", 00:09:31.441 "trtype": "$TEST_TRANSPORT", 00:09:31.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:31.441 "adrfam": "ipv4", 00:09:31.441 "trsvcid": "$NVMF_PORT", 00:09:31.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:31.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:31.441 "hdgst": ${hdgst:-false}, 00:09:31.441 "ddgst": ${ddgst:-false} 00:09:31.441 }, 00:09:31.441 "method": "bdev_nvme_attach_controller" 00:09:31.441 } 00:09:31.441 EOF 00:09:31.441 )") 00:09:31.441 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:09:31.441 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
00:09:31.441 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:09:31.441 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:31.441 "params": { 00:09:31.441 "name": "Nvme1", 00:09:31.441 "trtype": "tcp", 00:09:31.441 "traddr": "10.0.0.2", 00:09:31.441 "adrfam": "ipv4", 00:09:31.441 "trsvcid": "4420", 00:09:31.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:31.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:31.441 "hdgst": false, 00:09:31.441 "ddgst": false 00:09:31.441 }, 00:09:31.441 "method": "bdev_nvme_attach_controller" 00:09:31.441 }' 00:09:31.441 [2024-12-16 05:38:05.265935] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:31.441 [2024-12-16 05:38:05.265979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3217885 ] 00:09:31.698 [2024-12-16 05:38:05.320028] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.698 [2024-12-16 05:38:05.359091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.957 Running I/O for 10 seconds... 00:09:33.828 8647.00 IOPS, 67.55 MiB/s [2024-12-16T04:38:09.061Z] 8714.50 IOPS, 68.08 MiB/s [2024-12-16T04:38:09.997Z] 8741.33 IOPS, 68.29 MiB/s [2024-12-16T04:38:10.934Z] 8766.00 IOPS, 68.48 MiB/s [2024-12-16T04:38:11.869Z] 8751.40 IOPS, 68.37 MiB/s [2024-12-16T04:38:12.805Z] 8759.33 IOPS, 68.43 MiB/s [2024-12-16T04:38:13.741Z] 8766.14 IOPS, 68.49 MiB/s [2024-12-16T04:38:14.677Z] 8769.75 IOPS, 68.51 MiB/s [2024-12-16T04:38:16.052Z] 8766.00 IOPS, 68.48 MiB/s [2024-12-16T04:38:16.052Z] 8769.20 IOPS, 68.51 MiB/s 00:09:42.196 Latency(us) 00:09:42.196 [2024-12-16T04:38:16.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.196 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:42.196 Verification LBA range: start 0x0 length 0x1000 00:09:42.196 Nvme1n1 : 10.01 8773.44 68.54 0.00 0.00 14548.53 1919.27 22843.98 00:09:42.196 [2024-12-16T04:38:16.052Z] =================================================================================================================== 00:09:42.196 [2024-12-16T04:38:16.052Z] Total : 8773.44 68.54 0.00 0.00 14548.53 1919.27 22843.98 00:09:42.196 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3219669 00:09:42.196 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:42.196 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.196 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:42.196 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:42.196 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:09:42.196 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:09:42.196 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:42.196 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:42.196 { 00:09:42.196 "params": { 00:09:42.196 "name": 
"Nvme$subsystem", 00:09:42.196 "trtype": "$TEST_TRANSPORT", 00:09:42.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.196 "adrfam": "ipv4", 00:09:42.196 "trsvcid": "$NVMF_PORT", 00:09:42.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.196 "hdgst": ${hdgst:-false}, 00:09:42.196 "ddgst": ${ddgst:-false} 00:09:42.196 }, 00:09:42.196 "method": "bdev_nvme_attach_controller" 00:09:42.197 } 00:09:42.197 EOF 00:09:42.197 )") 00:09:42.197 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:09:42.197 [2024-12-16 05:38:15.851435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.197 [2024-12-16 05:38:15.851472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.197 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:09:42.197 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:09:42.197 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:42.197 "params": { 00:09:42.197 "name": "Nvme1", 00:09:42.197 "trtype": "tcp", 00:09:42.197 "traddr": "10.0.0.2", 00:09:42.197 "adrfam": "ipv4", 00:09:42.197 "trsvcid": "4420", 00:09:42.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:42.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:42.197 "hdgst": false, 00:09:42.197 "ddgst": false 00:09:42.197 }, 00:09:42.197 "method": "bdev_nvme_attach_controller" 00:09:42.197 }' 00:09:42.197 [2024-12-16 05:38:15.863447] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.197 [2024-12-16 05:38:15.863467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.197 [2024-12-16 05:38:15.875466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.197 [2024-12-16 05:38:15.875479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.197 [2024-12-16 05:38:15.887497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.197 [2024-12-16 05:38:15.887513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.197 [2024-12-16 05:38:15.887704] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:42.197 [2024-12-16 05:38:15.887743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3219669 ] 00:09:42.197 [2024-12-16 05:38:15.899536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.197 [2024-12-16 05:38:15.899553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.197 [2024-12-16 05:38:15.911560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.197 [2024-12-16 05:38:15.911571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.197 [2024-12-16 05:38:15.923594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.197 [2024-12-16 05:38:15.923606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.197 [2024-12-16 05:38:15.935626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.197 [2024-12-16 05:38:15.935638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.197 [2024-12-16 05:38:15.942025] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.197 [2024-12-16 05:38:15.947659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.197 [2024-12-16 05:38:15.947672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.197 [2024-12-16 05:38:15.959697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.197 [2024-12-16 05:38:15.959713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.197 [2024-12-16 05:38:15.971727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.197 [2024-12-16 05:38:15.971750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.197 [2024-12-16 05:38:15.981141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.197 [2024-12-16 05:38:15.983760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.197 [2024-12-16 05:38:15.983774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.197 [2024-12-16 05:38:15.995797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.197 [2024-12-16 05:38:15.995814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.197 [2024-12-16 05:38:16.007824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.197 [2024-12-16 05:38:16.007842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.197 [2024-12-16 05:38:16.019855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.197 [2024-12-16 05:38:16.019871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.197 [2024-12-16 05:38:16.031886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.197 [2024-12-16 05:38:16.031901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.197 [2024-12-16 05:38:16.043917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:09:42.197 [2024-12-16 05:38:16.043933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.457 [2024-12-16 05:38:16.055948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.457 [2024-12-16 05:38:16.055962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.457 [2024-12-16 05:38:16.067992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.457 [2024-12-16 05:38:16.068012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.457 [2024-12-16 05:38:16.080017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.457 [2024-12-16 05:38:16.080033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.457 [2024-12-16 05:38:16.092044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.457 [2024-12-16 05:38:16.092060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.457 [2024-12-16 05:38:16.104086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.457 [2024-12-16 05:38:16.104097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.457 [2024-12-16 05:38:16.116109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.457 [2024-12-16 05:38:16.116121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.457 [2024-12-16 05:38:16.128150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.457 [2024-12-16 05:38:16.128161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.457 [2024-12-16 05:38:16.140178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.457 [2024-12-16 05:38:16.140192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.457 [2024-12-16 05:38:16.152208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.457 [2024-12-16 05:38:16.152222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.457 [2024-12-16 05:38:16.164240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.457 [2024-12-16 05:38:16.164251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.457 [2024-12-16 05:38:16.176272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.457 [2024-12-16 05:38:16.176283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.457 [2024-12-16 05:38:16.188309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.457 [2024-12-16 05:38:16.188324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.457 [2024-12-16 05:38:16.200338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.457 [2024-12-16 05:38:16.200352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.457 [2024-12-16 05:38:16.212365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.457 [2024-12-16 05:38:16.212375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.457 [2024-12-16 
05:38:16.224399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.457 [2024-12-16 05:38:16.224409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.457 [2024-12-16 05:38:16.236434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.457 [2024-12-16 05:38:16.236447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.457 [2024-12-16 05:38:16.248472] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.457 [2024-12-16 05:38:16.248489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.457 Running I/O for 5 seconds... 00:09:42.457 [2024-12-16 05:38:16.260501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.457 [2024-12-16 05:38:16.260514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.457 [2024-12-16 05:38:16.276457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.457 [2024-12-16 05:38:16.276477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.457 [2024-12-16 05:38:16.290392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.457 [2024-12-16 05:38:16.290411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.457 [2024-12-16 05:38:16.304477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.457 [2024-12-16 05:38:16.304497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.716 [2024-12-16 05:38:16.318662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.716 [2024-12-16 05:38:16.318687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.716 [2024-12-16 05:38:16.332394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.716 [2024-12-16 05:38:16.332413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.716 [2024-12-16 05:38:16.346438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.716 [2024-12-16 05:38:16.346457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.716 [2024-12-16 05:38:16.360236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.716 [2024-12-16 05:38:16.360256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.716 [2024-12-16 05:38:16.374417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.716 [2024-12-16 05:38:16.374437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.716 [2024-12-16 05:38:16.388263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.716 [2024-12-16 05:38:16.388281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.716 [2024-12-16 05:38:16.402277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.716 [2024-12-16 05:38:16.402296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.716 [2024-12-16 05:38:16.416304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:42.716 [2024-12-16 05:38:16.416323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.716 [2024-12-16 05:38:16.429804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.716 [2024-12-16 05:38:16.429823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.716 [2024-12-16 05:38:16.443301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.716 [2024-12-16 05:38:16.443320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.716 [2024-12-16 05:38:16.456775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.716 [2024-12-16 05:38:16.456794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.716 [2024-12-16 05:38:16.470801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.716 [2024-12-16 05:38:16.470822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.716 [2024-12-16 05:38:16.484511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.716 [2024-12-16 05:38:16.484531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.716 [2024-12-16 05:38:16.498155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.716 [2024-12-16 05:38:16.498175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.716 [2024-12-16 05:38:16.511840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.716 [2024-12-16 05:38:16.511868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.716 [2024-12-16 05:38:16.525857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.717 [2024-12-16 05:38:16.525876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.717 [2024-12-16 05:38:16.539838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.717 [2024-12-16 05:38:16.539865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.717 [2024-12-16 05:38:16.553762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.717 [2024-12-16 05:38:16.553781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.717 [2024-12-16 05:38:16.567703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.717 [2024-12-16 05:38:16.567722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.975 [2024-12-16 05:38:16.581758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.975 [2024-12-16 05:38:16.581786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.975 [2024-12-16 05:38:16.595729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.975 [2024-12-16 05:38:16.595748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.975 [2024-12-16 05:38:16.609438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.975 [2024-12-16 05:38:16.609457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.975 [2024-12-16 05:38:16.623714] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.975 [2024-12-16 05:38:16.623734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.975 [2024-12-16 05:38:16.637523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.975 [2024-12-16 05:38:16.637542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.975 [2024-12-16 05:38:16.651314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.975 [2024-12-16 05:38:16.651334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.975 [2024-12-16 05:38:16.665351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.975 [2024-12-16 05:38:16.665372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.975 [2024-12-16 05:38:16.679148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.975 [2024-12-16 05:38:16.679169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.975 [2024-12-16 05:38:16.693373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.975 [2024-12-16 05:38:16.693393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.975 [2024-12-16 05:38:16.704549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.975 [2024-12-16 05:38:16.704569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.975 [2024-12-16 05:38:16.718181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.975 [2024-12-16 05:38:16.718200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.975 [2024-12-16 05:38:16.731893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.975 [2024-12-16 05:38:16.731914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.975 [2024-12-16 05:38:16.745662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.975 [2024-12-16 05:38:16.745682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.975 [2024-12-16 05:38:16.759593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.975 [2024-12-16 05:38:16.759613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.975 [2024-12-16 05:38:16.773011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.975 [2024-12-16 05:38:16.773031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.975 [2024-12-16 05:38:16.786976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.975 [2024-12-16 05:38:16.786996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.976 [2024-12-16 05:38:16.800734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.976 [2024-12-16 05:38:16.800754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.976 [2024-12-16 05:38:16.814471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.976 [2024-12-16 05:38:16.814491] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.976 [2024-12-16 05:38:16.828119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.976 [2024-12-16 05:38:16.828139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.235 [2024-12-16 05:38:16.841818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.235 [2024-12-16 05:38:16.841843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.235 [2024-12-16 05:38:16.855136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.235 [2024-12-16 05:38:16.855154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.235 [2024-12-16 05:38:16.868760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.235 [2024-12-16 05:38:16.868779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.235 [2024-12-16 05:38:16.882171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.235 [2024-12-16 05:38:16.882190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.235 [2024-12-16 05:38:16.896173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.235 [2024-12-16 05:38:16.896192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.235 [2024-12-16 05:38:16.909923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.235 [2024-12-16 05:38:16.909942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.235 [2024-12-16 05:38:16.923964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.235 [2024-12-16 05:38:16.923983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.235 [2024-12-16 05:38:16.937547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.235 [2024-12-16 05:38:16.937566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.235 [2024-12-16 05:38:16.951461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.235 [2024-12-16 05:38:16.951479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.235 [2024-12-16 05:38:16.965635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.235 [2024-12-16 05:38:16.965654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.235 [2024-12-16 05:38:16.979511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.235 [2024-12-16 05:38:16.979530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.235 [2024-12-16 05:38:16.992997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.235 [2024-12-16 05:38:16.993016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.235 [2024-12-16 05:38:17.006768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.235 [2024-12-16 05:38:17.006787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.235 [2024-12-16 05:38:17.020928] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.235 [2024-12-16 05:38:17.020946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.235 [2024-12-16 05:38:17.031856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.235 [2024-12-16 05:38:17.031875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.235 [2024-12-16 05:38:17.046417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.235 [2024-12-16 05:38:17.046436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.235 [2024-12-16 05:38:17.060463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.235 [2024-12-16 05:38:17.060483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.235 [2024-12-16 05:38:17.074203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.235 [2024-12-16 05:38:17.074223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.235 [2024-12-16 05:38:17.087857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.235 [2024-12-16 05:38:17.087876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.494 [2024-12-16 05:38:17.101960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.494 [2024-12-16 05:38:17.101980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.494 [2024-12-16 05:38:17.113283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.494 [2024-12-16 05:38:17.113302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.494 [2024-12-16 05:38:17.127600] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.494 [2024-12-16 05:38:17.127619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.494 [2024-12-16 05:38:17.137092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.494 [2024-12-16 05:38:17.137111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.494 [2024-12-16 05:38:17.151285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.494 [2024-12-16 05:38:17.151304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.494 [2024-12-16 05:38:17.165644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.494 [2024-12-16 05:38:17.165663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.494 [2024-12-16 05:38:17.179597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.494 [2024-12-16 05:38:17.179617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.494 [2024-12-16 05:38:17.193134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.494 [2024-12-16 05:38:17.193153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.494 [2024-12-16 05:38:17.207332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.494 [2024-12-16 05:38:17.207351] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.494 [2024-12-16 05:38:17.221106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.494 [2024-12-16 05:38:17.221124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.494 [2024-12-16 05:38:17.235177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.494 [2024-12-16 05:38:17.235196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.494 [2024-12-16 05:38:17.249137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.494 [2024-12-16 05:38:17.249155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.494 16770.00 IOPS, 131.02 MiB/s [2024-12-16T04:38:17.350Z] [2024-12-16 05:38:17.263049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.494 [2024-12-16 05:38:17.263067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.494 [2024-12-16 05:38:17.276791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.494 [2024-12-16 05:38:17.276810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.494 [2024-12-16 05:38:17.290443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.494 [2024-12-16 05:38:17.290462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.494 [2024-12-16 05:38:17.304126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.494 [2024-12-16 05:38:17.304145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.494 [2024-12-16 05:38:17.317811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.494 [2024-12-16 05:38:17.317830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.494 [2024-12-16 05:38:17.331690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.494 [2024-12-16 05:38:17.331709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.494 [2024-12-16 05:38:17.345880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.494 [2024-12-16 05:38:17.345899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.754 [2024-12-16 05:38:17.361432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.754 [2024-12-16 05:38:17.361452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.754 [2024-12-16 05:38:17.375310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.754 [2024-12-16 05:38:17.375329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.754 [2024-12-16 05:38:17.389058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.754 [2024-12-16 05:38:17.389078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.754 [2024-12-16 05:38:17.403117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.754 [2024-12-16 05:38:17.403137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.754 [2024-12-16 
05:38:17.416645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.754 [2024-12-16 05:38:17.416664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.754 [2024-12-16 05:38:17.430645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.754 [2024-12-16 05:38:17.430663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.754 [2024-12-16 05:38:17.444292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.754 [2024-12-16 05:38:17.444310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.754 [2024-12-16 05:38:17.458213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.754 [2024-12-16 05:38:17.458232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.754 [2024-12-16 05:38:17.472134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.754 [2024-12-16 05:38:17.472152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.754 [2024-12-16 05:38:17.486192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.754 [2024-12-16 05:38:17.486211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.754 [2024-12-16 05:38:17.499738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.754 [2024-12-16 05:38:17.499757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.754 [2024-12-16 05:38:17.513826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.754 [2024-12-16 05:38:17.513845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.754 [2024-12-16 05:38:17.527798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.754 [2024-12-16 05:38:17.527816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.754 [2024-12-16 05:38:17.541689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.754 [2024-12-16 05:38:17.541708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.754 [2024-12-16 05:38:17.555430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.754 [2024-12-16 05:38:17.555450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.754 [2024-12-16 05:38:17.568816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.754 [2024-12-16 05:38:17.568835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.754 [2024-12-16 05:38:17.582495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.754 [2024-12-16 05:38:17.582513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.754 [2024-12-16 05:38:17.596447] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.754 [2024-12-16 05:38:17.596465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.013 [2024-12-16 05:38:17.610544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.013 [2024-12-16 05:38:17.610567] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.013 [2024-12-16 05:38:17.619566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.013 [2024-12-16 05:38:17.619590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.013 [2024-12-16 05:38:17.633783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.013 [2024-12-16 05:38:17.633802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.013 [2024-12-16 05:38:17.647288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.013 [2024-12-16 05:38:17.647307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.013 [2024-12-16 05:38:17.656400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.013 [2024-12-16 05:38:17.656419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.013 [2024-12-16 05:38:17.665757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.013 [2024-12-16 05:38:17.665776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.013 [2024-12-16 05:38:17.680055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.013 [2024-12-16 05:38:17.680085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.013 [2024-12-16 05:38:17.693645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.013 [2024-12-16 05:38:17.693664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.013 [2024-12-16 05:38:17.707426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.013 [2024-12-16 05:38:17.707445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.013 [2024-12-16 05:38:17.721184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.013 [2024-12-16 05:38:17.721203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.013 [2024-12-16 05:38:17.734973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.013 [2024-12-16 05:38:17.734992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.013 [2024-12-16 05:38:17.748889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.013 [2024-12-16 05:38:17.748908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.013 [2024-12-16 05:38:17.762734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.013 [2024-12-16 05:38:17.762753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.013 [2024-12-16 05:38:17.776680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.013 [2024-12-16 05:38:17.776699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.013 [2024-12-16 05:38:17.790264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.013 [2024-12-16 05:38:17.790283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.013 [2024-12-16 05:38:17.804051] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.013 [2024-12-16 05:38:17.804070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.013 [2024-12-16 05:38:17.818201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.013 [2024-12-16 05:38:17.818220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.013 [2024-12-16 05:38:17.831962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.013 [2024-12-16 05:38:17.831982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.013 [2024-12-16 05:38:17.845879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.013 [2024-12-16 05:38:17.845899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.013 [2024-12-16 05:38:17.859414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.014 [2024-12-16 05:38:17.859438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.272 [2024-12-16 05:38:17.873137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.272 [2024-12-16 05:38:17.873159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.272 [2024-12-16 05:38:17.887511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.272 [2024-12-16 05:38:17.887532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.272 [2024-12-16 05:38:17.901102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.272 [2024-12-16 05:38:17.901122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.272 [2024-12-16 05:38:17.915394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.272 [2024-12-16 05:38:17.915414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.272 [2024-12-16 05:38:17.929266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.272 [2024-12-16 05:38:17.929286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.272 [2024-12-16 05:38:17.943217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.273 [2024-12-16 05:38:17.943237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.273 [2024-12-16 05:38:17.956938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.273 [2024-12-16 05:38:17.956958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.273 [2024-12-16 05:38:17.970763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.273 [2024-12-16 05:38:17.970783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.273 [2024-12-16 05:38:17.984570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.273 [2024-12-16 05:38:17.984589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.273 [2024-12-16 05:38:17.998466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.273 [2024-12-16 05:38:17.998486] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:44.273 [2024-12-16 05:38:18.012387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:44.273 [2024-12-16 05:38:18.012407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... this two-line error pair repeats roughly every 10-14 ms, with target timestamps running from 05:38:18.012 through 05:38:21.273, while the zcopy I/O job keeps running; the periodic throughput samples logged during that window are: ...]
00:09:44.532 16860.00 IOPS, 131.72 MiB/s [2024-12-16T04:38:18.388Z]
00:09:45.469 16878.00 IOPS, 131.86 MiB/s [2024-12-16T04:38:19.325Z]
00:09:46.524 16858.50 IOPS, 131.71 MiB/s [2024-12-16T04:38:20.380Z]
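The two messages repeated above come from the target side of the zcopy test: while I/O is in flight, the test keeps issuing nvmf_subsystem_add_ns for an NSID that is already attached, and spdk_nvmf_subsystem_add_ns_ext rejects each attempt; the test still completes and moves on, so these lines read as expected output rather than a failure. In SPDK's test harness, rpc_cmd is essentially a wrapper around scripts/rpc.py, so a minimal sketch of the kind of call that provokes this error looks like the following (assumptions: the target is already running, nqn.2016-06.io.spdk:cnode1 exists, and NSID 1 is already attached; malloc0 is the bdev name that appears later in this log).
# run from the SPDK source tree against the already-configured target
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# the RPC fails and the target log shows the same pair of lines as above:
#   subsystem.c: ... Requested NSID 1 already in use
#   nvmf_rpc.c:  ... Unable to add namespace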
00:09:47.562 16859.80 IOPS, 131.72 MiB/s [2024-12-16T04:38:21.418Z]
00:09:47.562 Latency(us)
00:09:47.562 [2024-12-16T04:38:21.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:47.562 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:47.562 Nvme1n1 : 5.01 16861.33 131.73 0.00 0.00 7584.17 3526.46 17601.10
00:09:47.562 [2024-12-16T04:38:21.418Z] ===================================================================================================================
00:09:47.562 [2024-12-16T04:38:21.418Z] Total : 16861.33 131.73 0.00 0.00 7584.17 3526.46 17601.10
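The summary row above can be cross-checked from the job parameters it prints (IO size 8192 bytes, queue depth 128): the MiB/s column is just IOPS times the 8 KiB block size, and with a fixed queue depth the average latency should sit close to depth divided by IOPS. A quick sanity check with awk; the small gap between the ~7591 us estimate and the reported 7584.17 us average simply means the effective queue depth averaged slightly under 128 over the run.
awk 'BEGIN {
  iops = 16861.33; io_size = 8192; qd = 128
  printf "throughput %.2f MiB/s\n", iops * io_size / (1024 * 1024)   # ~131.73, matches the MiB/s column
  printf "avg latency %.1f us\n", qd / iops * 1e6                    # ~7591 us vs the reported 7584.17 us
}'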
[... the same two-line error pair continues after the run summary, with target timestamps from 05:38:21.282 through 05:38:21.451 ...]
00:09:47.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3219669) - No such process
00:09:47.821 05:38:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3219669
00:09:47.821 05:38:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:47.821 05:38:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.821 05:38:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:47.821 05:38:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.821 05:38:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:47.821 05:38:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
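The trace above tears down the original namespace and rebuilds it on top of a delay bdev, so that the abort example invoked just below has slow, long-lived I/O outstanding to cancel: bdev_delay_create stacks an artificial latency on top of malloc0 and exposes it as delay0 (the four -r/-t/-w/-n values set the injected read/write latencies; check scripts/rpc.py bdev_delay_create --help for the exact parameter names and units). A standalone sketch of the same sequence, run from the SPDK repo root against the already-running target; the abort flags are copied from the invocation traced below and their meanings follow the usual SPDK perf/abort option conventions.
# assumptions: SPDK source tree, target already serving nqn.2016-06.io.spdk:cnode1, bdev malloc0 already created
./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# same invocation as the abort run traced below: core mask 0x1, 5 s run, queue depth 64,
# 50/50 random read/write against the TCP transport ID shown in this log
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'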
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.821 delay0 00:09:47.821 05:38:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.821 05:38:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:47.821 05:38:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.821 05:38:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.821 05:38:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.821 05:38:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:47.821 [2024-12-16 05:38:21.583142] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:54.386 Initializing NVMe Controllers 00:09:54.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:54.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:54.386 Initialization complete. Launching workers. 00:09:54.386 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1217 00:09:54.386 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1483, failed to submit 54 00:09:54.386 success 1314, unsuccessful 169, failed 0 00:09:54.386 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:54.386 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:54.386 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:54.386 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:54.386 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:54.386 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:54.386 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:54.386 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:54.386 rmmod nvme_tcp 00:09:54.386 rmmod nvme_fabrics 00:09:54.386 rmmod nvme_keyring 00:09:54.386 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:54.386 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:54.386 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:54.387 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 3217864 ']' 00:09:54.387 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 3217864 00:09:54.387 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3217864 ']' 00:09:54.387 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3217864 00:09:54.387 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:54.387 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:09:54.387 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3217864 00:09:54.387 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:54.387 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:54.387 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3217864' 00:09:54.387 killing process with pid 3217864 00:09:54.387 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3217864 00:09:54.387 05:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3217864 00:09:54.387 05:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:54.387 05:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:54.387 05:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:54.387 05:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:54.387 05:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:09:54.387 05:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:54.387 05:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:09:54.387 05:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:54.387 05:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:54.387 05:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.387 05:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.387 05:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:56.926 00:09:56.926 real 0m31.119s 00:09:56.926 user 0m41.932s 00:09:56.926 sys 0m10.867s 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.926 ************************************ 00:09:56.926 END TEST nvmf_zcopy 00:09:56.926 ************************************ 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:56.926 ************************************ 00:09:56.926 START TEST nvmf_nmic 00:09:56.926 ************************************ 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:56.926 * Looking for test storage... 
00:09:56.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.926 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:56.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.927 --rc genhtml_branch_coverage=1 00:09:56.927 --rc genhtml_function_coverage=1 00:09:56.927 --rc genhtml_legend=1 00:09:56.927 --rc geninfo_all_blocks=1 00:09:56.927 --rc geninfo_unexecuted_blocks=1 00:09:56.927 00:09:56.927 ' 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:56.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.927 --rc genhtml_branch_coverage=1 00:09:56.927 --rc genhtml_function_coverage=1 00:09:56.927 --rc genhtml_legend=1 00:09:56.927 --rc geninfo_all_blocks=1 00:09:56.927 --rc geninfo_unexecuted_blocks=1 00:09:56.927 00:09:56.927 ' 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:56.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.927 --rc genhtml_branch_coverage=1 00:09:56.927 --rc genhtml_function_coverage=1 00:09:56.927 --rc genhtml_legend=1 00:09:56.927 --rc geninfo_all_blocks=1 00:09:56.927 --rc geninfo_unexecuted_blocks=1 00:09:56.927 00:09:56.927 ' 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:56.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.927 --rc genhtml_branch_coverage=1 00:09:56.927 --rc genhtml_function_coverage=1 00:09:56.927 --rc genhtml_legend=1 00:09:56.927 --rc geninfo_all_blocks=1 00:09:56.927 --rc geninfo_unexecuted_blocks=1 00:09:56.927 00:09:56.927 ' 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:56.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:56.927 
05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:56.927 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:02.200 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:02.201 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:02.201 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.201 
05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:02.201 Found net devices under 0000:af:00.0: cvl_0_0 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:02.201 Found net devices under 0000:af:00.1: cvl_0_1 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:02.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:02.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:10:02.201 00:10:02.201 --- 10.0.0.2 ping statistics --- 00:10:02.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.201 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:02.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:10:02.201 00:10:02.201 --- 10.0.0.1 ping statistics --- 00:10:02.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.201 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:02.201 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:02.202 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:02.202 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:02.202 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:02.202 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:02.202 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=3225153 00:10:02.202 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 3225153 00:10:02.202 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:02.202 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3225153 ']' 00:10:02.202 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.202 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:02.202 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.202 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:02.202 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:02.202 [2024-12-16 05:38:35.941226] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:02.202 [2024-12-16 05:38:35.941268] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.202 [2024-12-16 05:38:35.998769] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:02.202 [2024-12-16 05:38:36.039957] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.202 [2024-12-16 05:38:36.039995] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.202 [2024-12-16 05:38:36.040002] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.202 [2024-12-16 05:38:36.040008] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.202 [2024-12-16 05:38:36.040013] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.202 [2024-12-16 05:38:36.040051] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.202 [2024-12-16 05:38:36.040134] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.202 [2024-12-16 05:38:36.040200] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.202 [2024-12-16 05:38:36.040201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:02.461 [2024-12-16 05:38:36.184752] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:02.461 Malloc0 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:02.461 [2024-12-16 05:38:36.236171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:02.461 test case1: single bdev can't be used in multiple subsystems 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:02.461 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:02.462 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.462 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:02.462 [2024-12-16 05:38:36.264084] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:02.462 [2024-12-16 05:38:36.264103] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:02.462 [2024-12-16 05:38:36.264110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.462 request: 00:10:02.462 { 00:10:02.462 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:02.462 "namespace": { 00:10:02.462 "bdev_name": "Malloc0", 00:10:02.462 "no_auto_visible": false 
00:10:02.462 }, 00:10:02.462 "method": "nvmf_subsystem_add_ns", 00:10:02.462 "req_id": 1 00:10:02.462 } 00:10:02.462 Got JSON-RPC error response 00:10:02.462 response: 00:10:02.462 { 00:10:02.462 "code": -32602, 00:10:02.462 "message": "Invalid parameters" 00:10:02.462 } 00:10:02.462 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:02.462 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:02.462 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:02.462 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:02.462 Adding namespace failed - expected result. 00:10:02.462 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:02.462 test case2: host connect to nvmf target in multiple paths 00:10:02.462 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:02.462 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.462 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:02.462 [2024-12-16 05:38:36.276231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:02.462 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.462 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:03.838 05:38:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:04.775 05:38:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:04.775 05:38:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:04.775 05:38:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:04.775 05:38:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:04.775 05:38:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:07.310 05:38:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:07.310 05:38:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:07.310 05:38:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:07.310 05:38:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:07.310 05:38:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:07.310 05:38:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:07.310 05:38:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:07.310 [global] 00:10:07.310 thread=1 00:10:07.310 invalidate=1 00:10:07.310 rw=write 00:10:07.310 time_based=1 00:10:07.310 runtime=1 00:10:07.310 ioengine=libaio 00:10:07.310 direct=1 00:10:07.310 bs=4096 00:10:07.310 iodepth=1 00:10:07.310 norandommap=0 00:10:07.310 numjobs=1 00:10:07.310 00:10:07.310 verify_dump=1 00:10:07.310 verify_backlog=512 00:10:07.310 verify_state_save=0 00:10:07.310 do_verify=1 00:10:07.310 verify=crc32c-intel 00:10:07.310 [job0] 00:10:07.310 filename=/dev/nvme0n1 00:10:07.310 Could not set queue depth (nvme0n1) 00:10:07.310 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.310 fio-3.35 00:10:07.310 Starting 1 thread 00:10:08.245 00:10:08.245 job0: (groupid=0, jobs=1): err= 0: pid=3225992: Mon Dec 16 05:38:42 2024 00:10:08.245 read: IOPS=1122, BW=4491KiB/s (4599kB/s)(4612KiB/1027msec) 00:10:08.245 slat (nsec): min=6961, max=29582, avg=8061.60, stdev=1948.09 00:10:08.245 clat (usec): min=168, max=42073, avg=625.03, stdev=4031.91 00:10:08.245 lat (usec): min=176, max=42095, avg=633.09, stdev=4033.26 00:10:08.245 clat percentiles (usec): 00:10:08.245 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 190], 00:10:08.245 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 237], 00:10:08.245 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:10:08.245 | 99.00th=[ 478], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:10:08.245 | 99.99th=[42206] 00:10:08.245 write: IOPS=1495, BW=5982KiB/s (6126kB/s)(6144KiB/1027msec); 0 zone resets 00:10:08.245 slat (usec): min=10, max=30156, avg=31.18, stdev=769.17 00:10:08.245 clat (usec): min=116, max=369, avg=156.99, stdev=21.10 00:10:08.245 lat (usec): min=127, max=30434, avg=188.17, stdev=772.55 00:10:08.245 clat percentiles (usec): 00:10:08.245 | 1.00th=[ 122], 5.00th=[ 126], 10.00th=[ 130], 20.00th=[ 135], 00:10:08.245 | 30.00th=[ 139], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 167], 00:10:08.245 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 180], 95.00th=[ 184], 00:10:08.245 | 99.00th=[ 194], 99.50th=[ 215], 99.90th=[ 318], 99.95th=[ 371], 00:10:08.245 | 99.99th=[ 371] 00:10:08.245 bw ( KiB/s): min= 1704, max=10584, per=100.00%, avg=6144.00, stdev=6279.11, samples=2 00:10:08.245 iops : min= 426, max= 2646, avg=1536.00, stdev=1569.78, samples=2 00:10:08.246 lat (usec) : 250=85.46%, 500=14.13% 00:10:08.246 lat (msec) : 50=0.41% 00:10:08.246 cpu : usr=2.53%, sys=3.70%, ctx=2692, majf=0, minf=1 00:10:08.246 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.246 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.246 issued rwts: total=1153,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.246 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.246 00:10:08.246 Run status group 0 (all jobs): 00:10:08.246 READ: bw=4491KiB/s (4599kB/s), 4491KiB/s-4491KiB/s (4599kB/s-4599kB/s), io=4612KiB (4723kB), run=1027-1027msec 00:10:08.246 WRITE: bw=5982KiB/s (6126kB/s), 5982KiB/s-5982KiB/s (6126kB/s-6126kB/s), io=6144KiB (6291kB), run=1027-1027msec 00:10:08.246 00:10:08.246 Disk stats (read/write): 00:10:08.246 nvme0n1: ios=1174/1536, merge=0/0, ticks=1490/229, in_queue=1719, util=98.90% 00:10:08.246 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:08.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:08.504 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:08.504 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:08.504 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:08.504 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:08.504 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:08.504 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:08.504 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:08.504 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:08.504 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:08.504 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:08.504 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:08.504 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:08.504 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:08.504 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:08.504 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:08.504 rmmod nvme_tcp 00:10:08.504 rmmod nvme_fabrics 00:10:08.504 rmmod nvme_keyring 00:10:08.763 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:08.763 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:08.763 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:08.763 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 3225153 ']' 00:10:08.763 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 3225153 00:10:08.763 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3225153 ']' 00:10:08.763 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3225153 00:10:08.764 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:08.764 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:08.764 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3225153 00:10:08.764 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:08.764 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:08.764 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3225153' 00:10:08.764 killing process with pid 3225153 00:10:08.764 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3225153 00:10:08.764 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@974 -- # wait 3225153 00:10:09.022 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:09.022 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:09.022 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:09.022 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:09.022 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:10:09.022 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:09.022 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:10:09.022 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:09.022 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:09.022 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.022 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.022 05:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.925 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:10.925 00:10:10.925 real 0m14.480s 00:10:10.925 user 0m33.284s 00:10:10.925 sys 0m4.939s 00:10:10.925 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.925 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:10.925 ************************************ 00:10:10.925 END TEST nvmf_nmic 00:10:10.925 ************************************ 00:10:10.925 05:38:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:10.925 05:38:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:10.925 05:38:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.925 05:38:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:10.925 ************************************ 00:10:10.925 START TEST nvmf_fio_target 00:10:10.925 ************************************ 00:10:10.925 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:11.185 * Looking for test storage... 
00:10:11.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.185 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:11.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.185 --rc genhtml_branch_coverage=1 00:10:11.185 --rc genhtml_function_coverage=1 00:10:11.185 --rc genhtml_legend=1 00:10:11.186 --rc geninfo_all_blocks=1 00:10:11.186 --rc geninfo_unexecuted_blocks=1 00:10:11.186 00:10:11.186 ' 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:11.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.186 --rc genhtml_branch_coverage=1 00:10:11.186 --rc genhtml_function_coverage=1 00:10:11.186 --rc genhtml_legend=1 00:10:11.186 --rc geninfo_all_blocks=1 00:10:11.186 --rc geninfo_unexecuted_blocks=1 00:10:11.186 00:10:11.186 ' 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:11.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.186 --rc genhtml_branch_coverage=1 00:10:11.186 --rc genhtml_function_coverage=1 00:10:11.186 --rc genhtml_legend=1 00:10:11.186 --rc geninfo_all_blocks=1 00:10:11.186 --rc geninfo_unexecuted_blocks=1 00:10:11.186 00:10:11.186 ' 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:11.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.186 --rc genhtml_branch_coverage=1 00:10:11.186 --rc genhtml_function_coverage=1 00:10:11.186 --rc genhtml_legend=1 00:10:11.186 --rc geninfo_all_blocks=1 00:10:11.186 --rc geninfo_unexecuted_blocks=1 00:10:11.186 00:10:11.186 ' 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:11.186 05:38:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:11.186 05:38:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.501 05:38:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:16.501 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:16.502 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:16.502 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.502 05:38:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:16.502 Found net devices under 0000:af:00.0: cvl_0_0 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:16.502 Found net devices under 0000:af:00.1: cvl_0_1 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.502 05:38:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:16.502 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:16.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:16.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:10:16.762 00:10:16.762 --- 10.0.0.2 ping statistics --- 00:10:16.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.762 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:16.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:16.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:10:16.762 00:10:16.762 --- 10.0.0.1 ping statistics --- 00:10:16.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.762 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=3229689 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 3229689 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3229689 ']' 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.762 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.762 [2024-12-16 05:38:50.507711] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:16.762 [2024-12-16 05:38:50.507765] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.762 [2024-12-16 05:38:50.568376] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:16.762 [2024-12-16 05:38:50.609954] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.762 [2024-12-16 05:38:50.609995] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.762 [2024-12-16 05:38:50.610003] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.762 [2024-12-16 05:38:50.610009] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.762 [2024-12-16 05:38:50.610014] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:16.762 [2024-12-16 05:38:50.610061] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.762 [2024-12-16 05:38:50.610140] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.762 [2024-12-16 05:38:50.610229] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.762 [2024-12-16 05:38:50.610230] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.021 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:17.021 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:17.021 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:17.021 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:17.021 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.021 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.021 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:17.280 [2024-12-16 05:38:50.950749] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.280 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:17.539 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:17.540 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:17.798 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:17.798 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:17.798 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:17.798 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.057 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:18.057 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:18.316 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.574 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:18.574 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.833 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:18.833 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.833 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:18.833 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:19.092 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:19.350 05:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:19.350 05:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:19.608 05:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:19.608 05:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:19.608 05:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:19.867 [2024-12-16 05:38:53.617356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:19.867 05:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:20.126 05:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:20.385 05:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:21.765 05:38:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:21.765 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:21.765 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:21.765 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:21.765 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:21.765 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:23.749 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:23.749 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:23.749 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:23.749 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:23.749 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:23.749 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:23.749 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:23.749 [global] 00:10:23.749 thread=1 00:10:23.749 invalidate=1 00:10:23.749 rw=write 00:10:23.749 time_based=1 00:10:23.749 runtime=1 00:10:23.749 ioengine=libaio 00:10:23.749 direct=1 00:10:23.749 bs=4096 00:10:23.749 iodepth=1 00:10:23.749 norandommap=0 00:10:23.749 numjobs=1 00:10:23.749 00:10:23.749 verify_dump=1 00:10:23.749 verify_backlog=512 00:10:23.749 verify_state_save=0 00:10:23.749 do_verify=1 00:10:23.749 verify=crc32c-intel 00:10:23.749 [job0] 00:10:23.749 filename=/dev/nvme0n1 00:10:23.749 [job1] 00:10:23.749 filename=/dev/nvme0n2 00:10:23.749 [job2] 00:10:23.749 filename=/dev/nvme0n3 00:10:23.749 [job3] 00:10:23.749 filename=/dev/nvme0n4 00:10:23.749 Could not set queue depth (nvme0n1) 00:10:23.749 Could not set queue depth (nvme0n2) 00:10:23.749 Could not set queue depth (nvme0n3) 00:10:23.749 Could not set queue depth (nvme0n4) 00:10:23.749 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.749 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.749 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.749 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.749 fio-3.35 00:10:23.749 Starting 4 threads 00:10:25.124 00:10:25.124 job0: (groupid=0, jobs=1): err= 0: pid=3231126: Mon Dec 16 05:38:58 2024 00:10:25.124 read: IOPS=608, BW=2432KiB/s (2491kB/s)(2532KiB/1041msec) 00:10:25.124 slat (nsec): min=7186, max=26937, avg=8607.28, stdev=2685.34 00:10:25.124 clat (usec): min=198, max=42014, avg=1214.19, stdev=6245.53 00:10:25.124 lat (usec): min=206, max=42038, avg=1222.80, stdev=6247.40 00:10:25.124 clat percentiles (usec): 00:10:25.124 | 1.00th=[ 204], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 229], 
00:10:25.124 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 245], 00:10:25.124 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:10:25.124 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:25.124 | 99.99th=[42206] 00:10:25.124 write: IOPS=983, BW=3935KiB/s (4029kB/s)(4096KiB/1041msec); 0 zone resets 00:10:25.124 slat (usec): min=10, max=40734, avg=72.69, stdev=1429.25 00:10:25.124 clat (usec): min=131, max=345, avg=182.29, stdev=34.98 00:10:25.124 lat (usec): min=142, max=40998, avg=254.98, stdev=1434.09 00:10:25.124 clat percentiles (usec): 00:10:25.124 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:10:25.124 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 182], 00:10:25.124 | 70.00th=[ 192], 80.00th=[ 210], 90.00th=[ 241], 95.00th=[ 253], 00:10:25.124 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 330], 99.95th=[ 347], 00:10:25.124 | 99.99th=[ 347] 00:10:25.124 bw ( KiB/s): min= 4096, max= 4096, per=18.99%, avg=4096.00, stdev= 0.00, samples=2 00:10:25.124 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:10:25.125 lat (usec) : 250=84.07%, 500=15.03% 00:10:25.125 lat (msec) : 50=0.91% 00:10:25.125 cpu : usr=1.35%, sys=2.60%, ctx=1660, majf=0, minf=1 00:10:25.125 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.125 issued rwts: total=633,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.125 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.125 job1: (groupid=0, jobs=1): err= 0: pid=3231152: Mon Dec 16 05:38:58 2024 00:10:25.125 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:25.125 slat (nsec): min=6922, max=26005, avg=8258.32, stdev=1996.54 00:10:25.125 clat (usec): min=193, max=41013, avg=410.47, stdev=2539.56 00:10:25.125 lat (usec): min=201, max=41038, avg=418.73, stdev=2540.45 00:10:25.125 clat percentiles (usec): 00:10:25.125 | 1.00th=[ 206], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 233], 00:10:25.125 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:10:25.125 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 277], 95.00th=[ 310], 00:10:25.125 | 99.00th=[ 388], 99.50th=[ 502], 99.90th=[41157], 99.95th=[41157], 00:10:25.125 | 99.99th=[41157] 00:10:25.125 write: IOPS=2026, BW=8108KiB/s (8302kB/s)(8116KiB/1001msec); 0 zone resets 00:10:25.125 slat (nsec): min=9696, max=68268, avg=11557.95, stdev=2183.52 00:10:25.125 clat (usec): min=100, max=268, avg=159.25, stdev=20.46 00:10:25.125 lat (usec): min=127, max=322, avg=170.81, stdev=21.20 00:10:25.125 clat percentiles (usec): 00:10:25.125 | 1.00th=[ 124], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 141], 00:10:25.125 | 30.00th=[ 145], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 163], 00:10:25.125 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 188], 95.00th=[ 196], 00:10:25.125 | 99.00th=[ 210], 99.50th=[ 215], 99.90th=[ 227], 99.95th=[ 255], 00:10:25.125 | 99.99th=[ 269] 00:10:25.125 bw ( KiB/s): min= 5112, max= 5112, per=23.70%, avg=5112.00, stdev= 0.00, samples=1 00:10:25.125 iops : min= 1278, max= 1278, avg=1278.00, stdev= 0.00, samples=1 00:10:25.125 lat (usec) : 250=81.51%, 500=18.26%, 750=0.03% 00:10:25.125 lat (msec) : 2=0.03%, 50=0.17% 00:10:25.125 cpu : usr=3.10%, sys=5.50%, ctx=3566, majf=0, minf=2 00:10:25.125 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:10:25.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.125 issued rwts: total=1536,2029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.125 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.125 job2: (groupid=0, jobs=1): err= 0: pid=3231186: Mon Dec 16 05:38:58 2024 00:10:25.125 read: IOPS=21, BW=86.6KiB/s (88.7kB/s)(88.0KiB/1016msec) 00:10:25.125 slat (nsec): min=9644, max=24650, avg=23490.77, stdev=3099.75 00:10:25.125 clat (usec): min=40893, max=41981, avg=41134.80, stdev=356.38 00:10:25.125 lat (usec): min=40917, max=42005, avg=41158.29, stdev=355.34 00:10:25.125 clat percentiles (usec): 00:10:25.125 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:25.125 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:25.125 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:10:25.125 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:25.125 | 99.99th=[42206] 00:10:25.125 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:10:25.125 slat (usec): min=9, max=21642, avg=53.14, stdev=955.99 00:10:25.125 clat (usec): min=131, max=298, avg=158.91, stdev=14.70 00:10:25.125 lat (usec): min=141, max=21894, avg=212.05, stdev=960.22 00:10:25.125 clat percentiles (usec): 00:10:25.125 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 149], 00:10:25.125 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 161], 00:10:25.125 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 182], 00:10:25.125 | 99.00th=[ 200], 99.50th=[ 208], 99.90th=[ 297], 99.95th=[ 297], 00:10:25.125 | 99.99th=[ 297] 00:10:25.125 bw ( KiB/s): min= 4096, max= 4096, per=18.99%, avg=4096.00, stdev= 0.00, samples=1 00:10:25.125 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:25.125 lat (usec) : 250=95.51%, 500=0.37% 00:10:25.125 lat (msec) : 50=4.12% 00:10:25.125 cpu : usr=0.20%, sys=0.59%, ctx=538, majf=0, minf=1 00:10:25.125 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.125 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.125 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.125 job3: (groupid=0, jobs=1): err= 0: pid=3231197: Mon Dec 16 05:38:58 2024 00:10:25.125 read: IOPS=1730, BW=6921KiB/s (7087kB/s)(6928KiB/1001msec) 00:10:25.125 slat (nsec): min=6590, max=28579, avg=7621.75, stdev=1236.95 00:10:25.125 clat (usec): min=169, max=41080, avg=377.65, stdev=2395.47 00:10:25.125 lat (usec): min=176, max=41094, avg=385.27, stdev=2395.81 00:10:25.125 clat percentiles (usec): 00:10:25.125 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 212], 00:10:25.125 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 239], 00:10:25.125 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 273], 95.00th=[ 302], 00:10:25.125 | 99.00th=[ 367], 99.50th=[ 457], 99.90th=[41157], 99.95th=[41157], 00:10:25.125 | 99.99th=[41157] 00:10:25.125 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:25.125 slat (nsec): min=9533, max=40340, avg=10637.76, stdev=1415.94 00:10:25.125 clat (usec): min=110, max=292, avg=147.60, stdev=20.21 00:10:25.125 lat (usec): min=121, max=332, avg=158.24, 
stdev=20.41 00:10:25.125 clat percentiles (usec): 00:10:25.125 | 1.00th=[ 119], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 133], 00:10:25.125 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 147], 00:10:25.125 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 178], 95.00th=[ 190], 00:10:25.125 | 99.00th=[ 208], 99.50th=[ 215], 99.90th=[ 227], 99.95th=[ 233], 00:10:25.125 | 99.99th=[ 293] 00:10:25.125 bw ( KiB/s): min=12288, max=12288, per=56.97%, avg=12288.00, stdev= 0.00, samples=1 00:10:25.125 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:25.125 lat (usec) : 250=88.86%, 500=10.98% 00:10:25.125 lat (msec) : 50=0.16% 00:10:25.125 cpu : usr=1.90%, sys=3.70%, ctx=3780, majf=0, minf=2 00:10:25.125 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.125 issued rwts: total=1732,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.125 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.125 00:10:25.125 Run status group 0 (all jobs): 00:10:25.125 READ: bw=14.7MiB/s (15.4MB/s), 86.6KiB/s-6921KiB/s (88.7kB/s-7087kB/s), io=15.3MiB (16.1MB), run=1001-1041msec 00:10:25.125 WRITE: bw=21.1MiB/s (22.1MB/s), 2016KiB/s-8184KiB/s (2064kB/s-8380kB/s), io=21.9MiB (23.0MB), run=1001-1041msec 00:10:25.125 00:10:25.125 Disk stats (read/write): 00:10:25.125 nvme0n1: ios=431/512, merge=0/0, ticks=1310/104, in_queue=1414, util=86.97% 00:10:25.125 nvme0n2: ios=1132/1536, merge=0/0, ticks=570/233, in_queue=803, util=85.68% 00:10:25.125 nvme0n3: ios=40/512, merge=0/0, ticks=1563/81, in_queue=1644, util=92.94% 00:10:25.125 nvme0n4: ios=1643/2048, merge=0/0, ticks=509/292, in_queue=801, util=94.14% 00:10:25.125 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:25.125 [global] 00:10:25.125 thread=1 00:10:25.125 invalidate=1 00:10:25.125 rw=randwrite 00:10:25.125 time_based=1 00:10:25.125 runtime=1 00:10:25.125 ioengine=libaio 00:10:25.125 direct=1 00:10:25.125 bs=4096 00:10:25.125 iodepth=1 00:10:25.125 norandommap=0 00:10:25.125 numjobs=1 00:10:25.125 00:10:25.125 verify_dump=1 00:10:25.125 verify_backlog=512 00:10:25.125 verify_state_save=0 00:10:25.125 do_verify=1 00:10:25.125 verify=crc32c-intel 00:10:25.125 [job0] 00:10:25.125 filename=/dev/nvme0n1 00:10:25.125 [job1] 00:10:25.125 filename=/dev/nvme0n2 00:10:25.125 [job2] 00:10:25.125 filename=/dev/nvme0n3 00:10:25.125 [job3] 00:10:25.125 filename=/dev/nvme0n4 00:10:25.383 Could not set queue depth (nvme0n1) 00:10:25.383 Could not set queue depth (nvme0n2) 00:10:25.383 Could not set queue depth (nvme0n3) 00:10:25.383 Could not set queue depth (nvme0n4) 00:10:25.641 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.641 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.641 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.641 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.641 fio-3.35 00:10:25.641 Starting 4 threads 00:10:27.016 00:10:27.016 job0: (groupid=0, jobs=1): err= 0: pid=3231595: Mon Dec 16 05:39:00 2024 
00:10:27.016 read: IOPS=21, BW=87.7KiB/s (89.8kB/s)(88.0KiB/1003msec) 00:10:27.016 slat (nsec): min=5748, max=24967, avg=21718.95, stdev=5409.57 00:10:27.016 clat (usec): min=40871, max=42066, avg=41019.82, stdev=236.83 00:10:27.016 lat (usec): min=40896, max=42071, avg=41041.54, stdev=233.15 00:10:27.016 clat percentiles (usec): 00:10:27.016 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:27.016 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:27.016 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:27.016 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:27.016 | 99.99th=[42206] 00:10:27.016 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:10:27.016 slat (nsec): min=5276, max=26540, avg=6840.91, stdev=1261.91 00:10:27.016 clat (usec): min=120, max=267, avg=184.86, stdev=16.79 00:10:27.016 lat (usec): min=127, max=275, avg=191.70, stdev=16.83 00:10:27.016 clat percentiles (usec): 00:10:27.016 | 1.00th=[ 149], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 172], 00:10:27.016 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:10:27.016 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 212], 00:10:27.016 | 99.00th=[ 227], 99.50th=[ 247], 99.90th=[ 269], 99.95th=[ 269], 00:10:27.016 | 99.99th=[ 269] 00:10:27.016 bw ( KiB/s): min= 4096, max= 4096, per=34.67%, avg=4096.00, stdev= 0.00, samples=1 00:10:27.016 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:27.017 lat (usec) : 250=95.51%, 500=0.37% 00:10:27.017 lat (msec) : 50=4.12% 00:10:27.017 cpu : usr=0.10%, sys=0.40%, ctx=535, majf=0, minf=1 00:10:27.017 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.017 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.017 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.017 job1: (groupid=0, jobs=1): err= 0: pid=3231596: Mon Dec 16 05:39:00 2024 00:10:27.017 read: IOPS=22, BW=91.8KiB/s (94.0kB/s)(92.0KiB/1002msec) 00:10:27.017 slat (nsec): min=9302, max=36983, avg=20959.57, stdev=6912.19 00:10:27.017 clat (usec): min=501, max=41104, avg=39193.09, stdev=8435.29 00:10:27.017 lat (usec): min=538, max=41120, avg=39214.05, stdev=8431.82 00:10:27.017 clat percentiles (usec): 00:10:27.017 | 1.00th=[ 502], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:10:27.017 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:27.017 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:27.017 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:27.017 | 99.99th=[41157] 00:10:27.017 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:10:27.017 slat (nsec): min=10911, max=49580, avg=13931.92, stdev=5499.08 00:10:27.017 clat (usec): min=127, max=356, avg=177.93, stdev=19.22 00:10:27.017 lat (usec): min=138, max=371, avg=191.86, stdev=20.21 00:10:27.017 clat percentiles (usec): 00:10:27.017 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 165], 00:10:27.017 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:10:27.017 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 206], 00:10:27.017 | 99.00th=[ 243], 99.50th=[ 269], 99.90th=[ 359], 99.95th=[ 359], 00:10:27.017 | 99.99th=[ 359] 
00:10:27.017 bw ( KiB/s): min= 4096, max= 4096, per=34.67%, avg=4096.00, stdev= 0.00, samples=1 00:10:27.017 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:27.017 lat (usec) : 250=94.95%, 500=0.75%, 750=0.19% 00:10:27.017 lat (msec) : 50=4.11% 00:10:27.017 cpu : usr=0.00%, sys=1.40%, ctx=537, majf=0, minf=1 00:10:27.017 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.017 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.017 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.017 job2: (groupid=0, jobs=1): err= 0: pid=3231597: Mon Dec 16 05:39:00 2024 00:10:27.017 read: IOPS=28, BW=115KiB/s (118kB/s)(120KiB/1040msec) 00:10:27.017 slat (nsec): min=6735, max=25497, avg=18948.60, stdev=6837.42 00:10:27.017 clat (usec): min=201, max=42349, avg=31533.01, stdev=17543.67 00:10:27.017 lat (usec): min=210, max=42357, avg=31551.96, stdev=17547.40 00:10:27.017 clat percentiles (usec): 00:10:27.017 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 334], 00:10:27.017 | 30.00th=[40633], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:10:27.017 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:27.017 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:27.017 | 99.99th=[42206] 00:10:27.017 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:10:27.017 slat (nsec): min=9285, max=40021, avg=10184.89, stdev=1590.37 00:10:27.017 clat (usec): min=139, max=348, avg=168.59, stdev=14.78 00:10:27.017 lat (usec): min=149, max=388, avg=178.77, stdev=15.57 00:10:27.017 clat percentiles (usec): 00:10:27.017 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:10:27.017 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:10:27.017 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 192], 00:10:27.017 | 99.00th=[ 198], 99.50th=[ 206], 99.90th=[ 351], 99.95th=[ 351], 00:10:27.017 | 99.99th=[ 351] 00:10:27.017 bw ( KiB/s): min= 4096, max= 4096, per=34.67%, avg=4096.00, stdev= 0.00, samples=1 00:10:27.017 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:27.017 lat (usec) : 250=95.20%, 500=0.55% 00:10:27.017 lat (msec) : 50=4.24% 00:10:27.017 cpu : usr=0.38%, sys=0.29%, ctx=542, majf=0, minf=2 00:10:27.017 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.017 issued rwts: total=30,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.017 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.017 job3: (groupid=0, jobs=1): err= 0: pid=3231598: Mon Dec 16 05:39:00 2024 00:10:27.017 read: IOPS=1005, BW=4023KiB/s (4120kB/s)(4180KiB/1039msec) 00:10:27.017 slat (nsec): min=6579, max=28792, avg=7777.18, stdev=2192.43 00:10:27.017 clat (usec): min=176, max=42400, avg=746.01, stdev=4527.88 00:10:27.017 lat (usec): min=184, max=42407, avg=753.79, stdev=4529.26 00:10:27.017 clat percentiles (usec): 00:10:27.017 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 196], 00:10:27.017 | 30.00th=[ 204], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 249], 00:10:27.017 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 
269], 00:10:27.017 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:10:27.017 | 99.99th=[42206] 00:10:27.017 write: IOPS=1478, BW=5913KiB/s (6055kB/s)(6144KiB/1039msec); 0 zone resets 00:10:27.017 slat (nsec): min=9319, max=37021, avg=10302.30, stdev=1372.19 00:10:27.017 clat (usec): min=112, max=591, avg=149.64, stdev=25.41 00:10:27.017 lat (usec): min=121, max=601, avg=159.94, stdev=25.65 00:10:27.017 clat percentiles (usec): 00:10:27.017 | 1.00th=[ 120], 5.00th=[ 123], 10.00th=[ 127], 20.00th=[ 131], 00:10:27.017 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 147], 00:10:27.017 | 70.00th=[ 163], 80.00th=[ 174], 90.00th=[ 184], 95.00th=[ 192], 00:10:27.017 | 99.00th=[ 206], 99.50th=[ 217], 99.90th=[ 289], 99.95th=[ 594], 00:10:27.017 | 99.99th=[ 594] 00:10:27.017 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:27.017 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:27.017 lat (usec) : 250=84.58%, 500=14.84%, 750=0.04% 00:10:27.017 lat (msec) : 10=0.04%, 50=0.50% 00:10:27.017 cpu : usr=1.06%, sys=2.50%, ctx=2582, majf=0, minf=1 00:10:27.017 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.017 issued rwts: total=1045,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.017 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.017 00:10:27.017 Run status group 0 (all jobs): 00:10:27.017 READ: bw=4308KiB/s (4411kB/s), 87.7KiB/s-4023KiB/s (89.8kB/s-4120kB/s), io=4480KiB (4588kB), run=1002-1040msec 00:10:27.017 WRITE: bw=11.5MiB/s (12.1MB/s), 1969KiB/s-5913KiB/s (2016kB/s-6055kB/s), io=12.0MiB (12.6MB), run=1002-1040msec 00:10:27.017 00:10:27.017 Disk stats (read/write): 00:10:27.017 nvme0n1: ios=39/512, merge=0/0, ticks=1563/92, in_queue=1655, util=87.17% 00:10:27.017 nvme0n2: ios=42/512, merge=0/0, ticks=1601/89, in_queue=1690, util=91.21% 00:10:27.017 nvme0n3: ios=74/512, merge=0/0, ticks=773/82, in_queue=855, util=92.18% 00:10:27.017 nvme0n4: ios=1055/1536, merge=0/0, ticks=1465/227, in_queue=1692, util=98.60% 00:10:27.017 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:27.017 [global] 00:10:27.017 thread=1 00:10:27.017 invalidate=1 00:10:27.017 rw=write 00:10:27.017 time_based=1 00:10:27.017 runtime=1 00:10:27.017 ioengine=libaio 00:10:27.017 direct=1 00:10:27.017 bs=4096 00:10:27.017 iodepth=128 00:10:27.017 norandommap=0 00:10:27.017 numjobs=1 00:10:27.017 00:10:27.017 verify_dump=1 00:10:27.017 verify_backlog=512 00:10:27.017 verify_state_save=0 00:10:27.017 do_verify=1 00:10:27.017 verify=crc32c-intel 00:10:27.017 [job0] 00:10:27.017 filename=/dev/nvme0n1 00:10:27.017 [job1] 00:10:27.017 filename=/dev/nvme0n2 00:10:27.017 [job2] 00:10:27.017 filename=/dev/nvme0n3 00:10:27.017 [job3] 00:10:27.017 filename=/dev/nvme0n4 00:10:27.017 Could not set queue depth (nvme0n1) 00:10:27.017 Could not set queue depth (nvme0n2) 00:10:27.017 Could not set queue depth (nvme0n3) 00:10:27.017 Could not set queue depth (nvme0n4) 00:10:27.275 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:27.275 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:10:27.275 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:27.275 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:27.275 fio-3.35 00:10:27.275 Starting 4 threads 00:10:28.652 00:10:28.652 job0: (groupid=0, jobs=1): err= 0: pid=3232019: Mon Dec 16 05:39:02 2024 00:10:28.652 read: IOPS=3293, BW=12.9MiB/s (13.5MB/s)(13.0MiB/1008msec) 00:10:28.652 slat (nsec): min=1055, max=21187k, avg=136547.41, stdev=1111853.94 00:10:28.652 clat (msec): min=3, max=111, avg=19.03, stdev=19.19 00:10:28.652 lat (msec): min=3, max=111, avg=19.17, stdev=19.33 00:10:28.652 clat percentiles (msec): 00:10:28.652 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 10], 00:10:28.652 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 15], 00:10:28.652 | 70.00th=[ 17], 80.00th=[ 21], 90.00th=[ 36], 95.00th=[ 64], 00:10:28.652 | 99.00th=[ 101], 99.50th=[ 112], 99.90th=[ 112], 99.95th=[ 112], 00:10:28.652 | 99.99th=[ 112] 00:10:28.652 write: IOPS=3706, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1008msec); 0 zone resets 00:10:28.652 slat (usec): min=2, max=23368, avg=119.35, stdev=943.81 00:10:28.652 clat (usec): min=1132, max=81999, avg=17347.54, stdev=13396.87 00:10:28.652 lat (usec): min=1143, max=82009, avg=17466.89, stdev=13475.37 00:10:28.652 clat percentiles (usec): 00:10:28.652 | 1.00th=[ 3687], 5.00th=[ 5669], 10.00th=[ 6325], 20.00th=[ 8291], 00:10:28.652 | 30.00th=[10290], 40.00th=[11338], 50.00th=[12125], 60.00th=[14353], 00:10:28.652 | 70.00th=[17171], 80.00th=[26084], 90.00th=[32637], 95.00th=[46400], 00:10:28.652 | 99.00th=[64226], 99.50th=[82314], 99.90th=[82314], 99.95th=[82314], 00:10:28.652 | 99.99th=[82314] 00:10:28.652 bw ( KiB/s): min=12896, max=16007, per=21.36%, avg=14451.50, stdev=2199.81, samples=2 00:10:28.652 iops : min= 3224, max= 4001, avg=3612.50, stdev=549.42, samples=2 00:10:28.652 lat (msec) : 2=0.03%, 4=1.06%, 10=24.31%, 20=50.95%, 50=18.54% 00:10:28.652 lat (msec) : 100=4.69%, 250=0.43% 00:10:28.652 cpu : usr=2.78%, sys=3.97%, ctx=228, majf=0, minf=1 00:10:28.652 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:28.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.652 issued rwts: total=3320,3736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.652 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.652 job1: (groupid=0, jobs=1): err= 0: pid=3232020: Mon Dec 16 05:39:02 2024 00:10:28.652 read: IOPS=4430, BW=17.3MiB/s (18.1MB/s)(17.4MiB/1007msec) 00:10:28.652 slat (nsec): min=1102, max=58380k, avg=105237.42, stdev=1105174.83 00:10:28.652 clat (usec): min=958, max=67250, avg=13134.75, stdev=9065.76 00:10:28.652 lat (usec): min=5213, max=67255, avg=13239.99, stdev=9105.09 00:10:28.652 clat percentiles (usec): 00:10:28.652 | 1.00th=[ 5997], 5.00th=[ 7832], 10.00th=[ 8225], 20.00th=[ 8979], 00:10:28.652 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11338], 60.00th=[12256], 00:10:28.652 | 70.00th=[12911], 80.00th=[13566], 90.00th=[16057], 95.00th=[21890], 00:10:28.652 | 99.00th=[66847], 99.50th=[66847], 99.90th=[67634], 99.95th=[67634], 00:10:28.652 | 99.99th=[67634] 00:10:28.652 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:10:28.652 slat (nsec): min=1930, max=18351k, avg=109326.39, stdev=722516.35 00:10:28.652 clat (usec): min=1030, max=42440, avg=14371.89, 
stdev=7199.43 00:10:28.652 lat (usec): min=1041, max=42448, avg=14481.21, stdev=7253.81 00:10:28.652 clat percentiles (usec): 00:10:28.652 | 1.00th=[ 5407], 5.00th=[ 7570], 10.00th=[ 8291], 20.00th=[ 9765], 00:10:28.652 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11731], 60.00th=[13173], 00:10:28.652 | 70.00th=[15008], 80.00th=[18220], 90.00th=[26608], 95.00th=[30540], 00:10:28.652 | 99.00th=[38011], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:28.652 | 99.99th=[42206] 00:10:28.652 bw ( KiB/s): min=16990, max=19840, per=27.22%, avg=18415.00, stdev=2015.25, samples=2 00:10:28.652 iops : min= 4247, max= 4960, avg=4603.50, stdev=504.17, samples=2 00:10:28.652 lat (usec) : 1000=0.01% 00:10:28.652 lat (msec) : 2=0.08%, 4=0.14%, 10=28.73%, 20=59.53%, 50=10.11% 00:10:28.652 lat (msec) : 100=1.40% 00:10:28.652 cpu : usr=2.88%, sys=4.57%, ctx=357, majf=0, minf=1 00:10:28.652 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:28.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.652 issued rwts: total=4462,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.652 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.652 job2: (groupid=0, jobs=1): err= 0: pid=3232021: Mon Dec 16 05:39:02 2024 00:10:28.652 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1004msec) 00:10:28.652 slat (nsec): min=1188, max=18977k, avg=145011.26, stdev=897409.94 00:10:28.652 clat (usec): min=2413, max=60228, avg=17505.75, stdev=8086.55 00:10:28.652 lat (usec): min=4228, max=60255, avg=17650.76, stdev=8140.09 00:10:28.652 clat percentiles (usec): 00:10:28.652 | 1.00th=[ 7504], 5.00th=[ 9503], 10.00th=[11076], 20.00th=[11994], 00:10:28.652 | 30.00th=[12518], 40.00th=[13304], 50.00th=[15533], 60.00th=[16450], 00:10:28.652 | 70.00th=[18744], 80.00th=[20579], 90.00th=[29754], 95.00th=[35390], 00:10:28.652 | 99.00th=[48497], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:10:28.652 | 99.99th=[60031] 00:10:28.652 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:10:28.652 slat (usec): min=2, max=15363, avg=129.43, stdev=838.62 00:10:28.652 clat (usec): min=6754, max=62452, avg=18063.42, stdev=9361.84 00:10:28.652 lat (usec): min=6760, max=62477, avg=18192.85, stdev=9447.72 00:10:28.652 clat percentiles (usec): 00:10:28.652 | 1.00th=[ 8455], 5.00th=[ 9896], 10.00th=[11207], 20.00th=[11731], 00:10:28.652 | 30.00th=[12125], 40.00th=[13173], 50.00th=[14091], 60.00th=[15533], 00:10:28.652 | 70.00th=[20055], 80.00th=[22938], 90.00th=[29492], 95.00th=[42730], 00:10:28.652 | 99.00th=[53216], 99.50th=[54264], 99.90th=[56361], 99.95th=[60556], 00:10:28.652 | 99.99th=[62653] 00:10:28.652 bw ( KiB/s): min=12288, max=16351, per=21.17%, avg=14319.50, stdev=2872.97, samples=2 00:10:28.652 iops : min= 3072, max= 4087, avg=3579.50, stdev=717.71, samples=2 00:10:28.652 lat (msec) : 4=0.01%, 10=6.25%, 20=67.68%, 50=25.06%, 100=1.01% 00:10:28.652 cpu : usr=2.69%, sys=4.09%, ctx=358, majf=0, minf=1 00:10:28.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:28.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.653 issued rwts: total=3556,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.653 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.653 job3: (groupid=0, jobs=1): err= 0: 
pid=3232022: Mon Dec 16 05:39:02 2024 00:10:28.653 read: IOPS=4883, BW=19.1MiB/s (20.0MB/s)(19.2MiB/1007msec) 00:10:28.653 slat (nsec): min=1265, max=20064k, avg=104696.36, stdev=789665.38 00:10:28.653 clat (usec): min=2815, max=57202, avg=13416.58, stdev=5854.63 00:10:28.653 lat (usec): min=2821, max=57236, avg=13521.27, stdev=5893.03 00:10:28.653 clat percentiles (usec): 00:10:28.653 | 1.00th=[ 4490], 5.00th=[ 7701], 10.00th=[ 9241], 20.00th=[10552], 00:10:28.653 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12256], 60.00th=[12911], 00:10:28.653 | 70.00th=[13566], 80.00th=[14877], 90.00th=[17957], 95.00th=[20841], 00:10:28.653 | 99.00th=[44827], 99.50th=[48497], 99.90th=[52167], 99.95th=[52167], 00:10:28.653 | 99.99th=[57410] 00:10:28.653 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:10:28.653 slat (usec): min=2, max=13379, avg=84.31, stdev=609.47 00:10:28.653 clat (usec): min=1221, max=41704, avg=11998.94, stdev=5064.37 00:10:28.653 lat (usec): min=1231, max=41712, avg=12083.25, stdev=5101.83 00:10:28.653 clat percentiles (usec): 00:10:28.653 | 1.00th=[ 3556], 5.00th=[ 5932], 10.00th=[ 7570], 20.00th=[ 9372], 00:10:28.653 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11600], 60.00th=[11994], 00:10:28.653 | 70.00th=[12518], 80.00th=[13304], 90.00th=[15401], 95.00th=[17171], 00:10:28.653 | 99.00th=[37487], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:28.653 | 99.99th=[41681] 00:10:28.653 bw ( KiB/s): min=20432, max=20487, per=30.24%, avg=20459.50, stdev=38.89, samples=2 00:10:28.653 iops : min= 5108, max= 5121, avg=5114.50, stdev= 9.19, samples=2 00:10:28.653 lat (msec) : 2=0.02%, 4=0.83%, 10=20.18%, 20=74.14%, 50=4.76% 00:10:28.653 lat (msec) : 100=0.07% 00:10:28.653 cpu : usr=3.48%, sys=6.36%, ctx=379, majf=0, minf=1 00:10:28.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:28.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.653 issued rwts: total=4918,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.653 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.653 00:10:28.653 Run status group 0 (all jobs): 00:10:28.653 READ: bw=63.0MiB/s (66.1MB/s), 12.9MiB/s-19.1MiB/s (13.5MB/s-20.0MB/s), io=63.5MiB (66.6MB), run=1004-1008msec 00:10:28.653 WRITE: bw=66.1MiB/s (69.3MB/s), 13.9MiB/s-19.9MiB/s (14.6MB/s-20.8MB/s), io=66.6MiB (69.8MB), run=1004-1008msec 00:10:28.653 00:10:28.653 Disk stats (read/write): 00:10:28.653 nvme0n1: ios=2610/2610, merge=0/0, ticks=28071/23719, in_queue=51790, util=94.09% 00:10:28.653 nvme0n2: ios=4138/4111, merge=0/0, ticks=42567/48749, in_queue=91316, util=98.68% 00:10:28.653 nvme0n3: ios=3072/3480, merge=0/0, ticks=17188/17765, in_queue=34953, util=87.51% 00:10:28.653 nvme0n4: ios=4176/4608, merge=0/0, ticks=39498/40671, in_queue=80169, util=98.95% 00:10:28.653 05:39:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:28.653 [global] 00:10:28.653 thread=1 00:10:28.653 invalidate=1 00:10:28.653 rw=randwrite 00:10:28.653 time_based=1 00:10:28.653 runtime=1 00:10:28.653 ioengine=libaio 00:10:28.653 direct=1 00:10:28.653 bs=4096 00:10:28.653 iodepth=128 00:10:28.653 norandommap=0 00:10:28.653 numjobs=1 00:10:28.653 00:10:28.653 verify_dump=1 00:10:28.653 verify_backlog=512 00:10:28.653 verify_state_save=0 00:10:28.653 
do_verify=1 00:10:28.653 verify=crc32c-intel 00:10:28.653 [job0] 00:10:28.653 filename=/dev/nvme0n1 00:10:28.653 [job1] 00:10:28.653 filename=/dev/nvme0n2 00:10:28.653 [job2] 00:10:28.653 filename=/dev/nvme0n3 00:10:28.653 [job3] 00:10:28.653 filename=/dev/nvme0n4 00:10:28.653 Could not set queue depth (nvme0n1) 00:10:28.653 Could not set queue depth (nvme0n2) 00:10:28.653 Could not set queue depth (nvme0n3) 00:10:28.653 Could not set queue depth (nvme0n4) 00:10:28.653 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.653 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.653 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.653 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.653 fio-3.35 00:10:28.653 Starting 4 threads 00:10:30.031 00:10:30.031 job0: (groupid=0, jobs=1): err= 0: pid=3232457: Mon Dec 16 05:39:03 2024 00:10:30.031 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:10:30.031 slat (nsec): min=1239, max=25117k, avg=143116.57, stdev=1029246.70 00:10:30.031 clat (usec): min=3411, max=67831, avg=18734.58, stdev=11615.47 00:10:30.031 lat (usec): min=3419, max=67838, avg=18877.69, stdev=11685.59 00:10:30.031 clat percentiles (usec): 00:10:30.031 | 1.00th=[ 5211], 5.00th=[ 8160], 10.00th=[10683], 20.00th=[11338], 00:10:30.031 | 30.00th=[12125], 40.00th=[12387], 50.00th=[13304], 60.00th=[15008], 00:10:30.031 | 70.00th=[18482], 80.00th=[22938], 90.00th=[39584], 95.00th=[45351], 00:10:30.031 | 99.00th=[52167], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:10:30.031 | 99.99th=[67634] 00:10:30.031 write: IOPS=3598, BW=14.1MiB/s (14.7MB/s)(14.1MiB/1003msec); 0 zone resets 00:10:30.031 slat (usec): min=2, max=24005, avg=122.95, stdev=904.59 00:10:30.031 clat (usec): min=1183, max=37881, avg=16586.57, stdev=7793.82 00:10:30.031 lat (usec): min=1192, max=43181, avg=16709.53, stdev=7852.59 00:10:30.031 clat percentiles (usec): 00:10:30.031 | 1.00th=[ 3228], 5.00th=[ 6718], 10.00th=[ 9503], 20.00th=[11469], 00:10:30.031 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13435], 60.00th=[16057], 00:10:30.031 | 70.00th=[17957], 80.00th=[22152], 90.00th=[28967], 95.00th=[33162], 00:10:30.031 | 99.00th=[35914], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:10:30.031 | 99.99th=[38011] 00:10:30.031 bw ( KiB/s): min=12288, max=16384, per=20.33%, avg=14336.00, stdev=2896.31, samples=2 00:10:30.031 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:10:30.031 lat (msec) : 2=0.04%, 4=1.13%, 10=8.63%, 20=65.08%, 50=24.20% 00:10:30.031 lat (msec) : 100=0.92% 00:10:30.031 cpu : usr=2.10%, sys=4.89%, ctx=295, majf=0, minf=1 00:10:30.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:30.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.031 issued rwts: total=3584,3609,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.031 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.031 job1: (groupid=0, jobs=1): err= 0: pid=3232458: Mon Dec 16 05:39:03 2024 00:10:30.031 read: IOPS=5130, BW=20.0MiB/s (21.0MB/s)(21.0MiB/1047msec) 00:10:30.031 slat (nsec): min=1528, max=8648.2k, avg=95608.08, stdev=569969.83 00:10:30.031 clat (usec): min=1643, 
max=52820, avg=12906.45, stdev=6857.55 00:10:30.031 lat (usec): min=1653, max=58473, avg=13002.06, stdev=6884.46 00:10:30.031 clat percentiles (usec): 00:10:30.031 | 1.00th=[ 7177], 5.00th=[ 8356], 10.00th=[ 9110], 20.00th=[10159], 00:10:30.031 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11600], 60.00th=[12125], 00:10:30.031 | 70.00th=[12387], 80.00th=[13042], 90.00th=[15533], 95.00th=[22152], 00:10:30.031 | 99.00th=[52691], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:10:30.031 | 99.99th=[52691] 00:10:30.031 write: IOPS=5379, BW=21.0MiB/s (22.0MB/s)(22.0MiB/1047msec); 0 zone resets 00:10:30.031 slat (usec): min=2, max=10012, avg=80.55, stdev=406.98 00:10:30.031 clat (usec): min=1546, max=23018, avg=11234.02, stdev=1573.92 00:10:30.031 lat (usec): min=1561, max=23070, avg=11314.57, stdev=1614.78 00:10:30.031 clat percentiles (usec): 00:10:30.031 | 1.00th=[ 6783], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[10290], 00:10:30.031 | 30.00th=[10552], 40.00th=[10683], 50.00th=[11076], 60.00th=[11600], 00:10:30.031 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12780], 95.00th=[13435], 00:10:30.031 | 99.00th=[15533], 99.50th=[16450], 99.90th=[20841], 99.95th=[21103], 00:10:30.031 | 99.99th=[22938] 00:10:30.031 bw ( KiB/s): min=21512, max=23544, per=31.95%, avg=22528.00, stdev=1436.84, samples=2 00:10:30.031 iops : min= 5378, max= 5886, avg=5632.00, stdev=359.21, samples=2 00:10:30.031 lat (msec) : 2=0.15%, 4=0.32%, 10=13.99%, 20=82.23%, 50=2.73% 00:10:30.031 lat (msec) : 100=0.57% 00:10:30.031 cpu : usr=3.44%, sys=6.79%, ctx=665, majf=0, minf=1 00:10:30.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:30.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.031 issued rwts: total=5372,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.031 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.031 job2: (groupid=0, jobs=1): err= 0: pid=3232459: Mon Dec 16 05:39:03 2024 00:10:30.031 read: IOPS=4906, BW=19.2MiB/s (20.1MB/s)(19.3MiB/1007msec) 00:10:30.031 slat (nsec): min=1566, max=12587k, avg=108648.82, stdev=793025.04 00:10:30.031 clat (usec): min=4520, max=26951, avg=13538.45, stdev=3420.42 00:10:30.031 lat (usec): min=4526, max=26964, avg=13647.10, stdev=3465.96 00:10:30.031 clat percentiles (usec): 00:10:30.031 | 1.00th=[ 5014], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[11600], 00:10:30.031 | 30.00th=[11863], 40.00th=[12125], 50.00th=[13304], 60.00th=[13698], 00:10:30.031 | 70.00th=[14353], 80.00th=[16057], 90.00th=[18744], 95.00th=[20317], 00:10:30.031 | 99.00th=[22938], 99.50th=[23987], 99.90th=[25297], 99.95th=[25560], 00:10:30.031 | 99.99th=[26870] 00:10:30.031 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:10:30.031 slat (usec): min=2, max=11419, avg=84.25, stdev=505.43 00:10:30.031 clat (usec): min=2976, max=25428, avg=11825.75, stdev=2880.43 00:10:30.031 lat (usec): min=2999, max=25433, avg=11910.00, stdev=2919.24 00:10:30.031 clat percentiles (usec): 00:10:30.031 | 1.00th=[ 4178], 5.00th=[ 6128], 10.00th=[ 8160], 20.00th=[10814], 00:10:30.031 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11863], 60.00th=[11863], 00:10:30.031 | 70.00th=[12387], 80.00th=[13042], 90.00th=[14746], 95.00th=[17695], 00:10:30.031 | 99.00th=[19530], 99.50th=[21103], 99.90th=[23462], 99.95th=[23725], 00:10:30.031 | 99.99th=[25560] 00:10:30.031 bw ( KiB/s): min=20480, max=20480, per=29.04%, avg=20480.00, stdev= 0.00, 
samples=2 00:10:30.031 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:30.031 lat (msec) : 4=0.42%, 10=13.47%, 20=82.53%, 50=3.59% 00:10:30.031 cpu : usr=3.48%, sys=6.56%, ctx=551, majf=0, minf=1 00:10:30.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:30.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.031 issued rwts: total=4941,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.031 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.031 job3: (groupid=0, jobs=1): err= 0: pid=3232460: Mon Dec 16 05:39:03 2024 00:10:30.031 read: IOPS=3850, BW=15.0MiB/s (15.8MB/s)(15.1MiB/1005msec) 00:10:30.031 slat (nsec): min=1640, max=15846k, avg=123438.54, stdev=814001.32 00:10:30.031 clat (usec): min=3097, max=38886, avg=15808.80, stdev=3793.89 00:10:30.031 lat (usec): min=5583, max=38910, avg=15932.23, stdev=3861.96 00:10:30.031 clat percentiles (usec): 00:10:30.031 | 1.00th=[ 7963], 5.00th=[10421], 10.00th=[12518], 20.00th=[13435], 00:10:30.031 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14615], 60.00th=[15270], 00:10:30.031 | 70.00th=[17171], 80.00th=[18220], 90.00th=[20317], 95.00th=[22938], 00:10:30.031 | 99.00th=[28181], 99.50th=[28443], 99.90th=[29230], 99.95th=[36439], 00:10:30.031 | 99.99th=[39060] 00:10:30.031 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:10:30.031 slat (nsec): min=1992, max=10327k, avg=118798.05, stdev=668248.95 00:10:30.031 clat (usec): min=1623, max=33088, avg=16198.42, stdev=6057.94 00:10:30.031 lat (usec): min=1651, max=34060, avg=16317.22, stdev=6118.54 00:10:30.031 clat percentiles (usec): 00:10:30.031 | 1.00th=[ 7504], 5.00th=[ 9110], 10.00th=[10814], 20.00th=[11469], 00:10:30.031 | 30.00th=[12256], 40.00th=[13698], 50.00th=[14353], 60.00th=[15664], 00:10:30.031 | 70.00th=[16909], 80.00th=[21365], 90.00th=[26346], 95.00th=[30540], 00:10:30.031 | 99.00th=[32637], 99.50th=[32637], 99.90th=[33162], 99.95th=[33162], 00:10:30.031 | 99.99th=[33162] 00:10:30.031 bw ( KiB/s): min=16368, max=16400, per=23.24%, avg=16384.00, stdev=22.63, samples=2 00:10:30.031 iops : min= 4092, max= 4100, avg=4096.00, stdev= 5.66, samples=2 00:10:30.031 lat (msec) : 2=0.03%, 4=0.01%, 10=4.82%, 20=78.82%, 50=16.32% 00:10:30.031 cpu : usr=2.89%, sys=5.78%, ctx=382, majf=0, minf=2 00:10:30.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:30.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.031 issued rwts: total=3870,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.031 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.031 00:10:30.031 Run status group 0 (all jobs): 00:10:30.031 READ: bw=66.3MiB/s (69.5MB/s), 14.0MiB/s-20.0MiB/s (14.6MB/s-21.0MB/s), io=69.4MiB (72.8MB), run=1003-1047msec 00:10:30.031 WRITE: bw=68.9MiB/s (72.2MB/s), 14.1MiB/s-21.0MiB/s (14.7MB/s-22.0MB/s), io=72.1MiB (75.6MB), run=1003-1047msec 00:10:30.031 00:10:30.031 Disk stats (read/write): 00:10:30.031 nvme0n1: ios=2613/3072, merge=0/0, ticks=23599/25953, in_queue=49552, util=97.49% 00:10:30.031 nvme0n2: ios=4185/4608, merge=0/0, ticks=28590/26972, in_queue=55562, util=97.02% 00:10:30.031 nvme0n3: ios=4086/4103, merge=0/0, ticks=52833/46308, in_queue=99141, util=95.88% 00:10:30.031 nvme0n4: ios=2844/3072, merge=0/0, 
ticks=28961/29205, in_queue=58166, util=88.85% 00:10:30.031 05:39:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:30.031 05:39:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3232689 00:10:30.031 05:39:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:30.031 05:39:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:30.031 [global] 00:10:30.031 thread=1 00:10:30.031 invalidate=1 00:10:30.031 rw=read 00:10:30.031 time_based=1 00:10:30.031 runtime=10 00:10:30.032 ioengine=libaio 00:10:30.032 direct=1 00:10:30.032 bs=4096 00:10:30.032 iodepth=1 00:10:30.032 norandommap=1 00:10:30.032 numjobs=1 00:10:30.032 00:10:30.032 [job0] 00:10:30.032 filename=/dev/nvme0n1 00:10:30.032 [job1] 00:10:30.032 filename=/dev/nvme0n2 00:10:30.032 [job2] 00:10:30.032 filename=/dev/nvme0n3 00:10:30.032 [job3] 00:10:30.032 filename=/dev/nvme0n4 00:10:30.032 Could not set queue depth (nvme0n1) 00:10:30.032 Could not set queue depth (nvme0n2) 00:10:30.032 Could not set queue depth (nvme0n3) 00:10:30.032 Could not set queue depth (nvme0n4) 00:10:30.290 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.290 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.290 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.290 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.290 fio-3.35 00:10:30.290 Starting 4 threads 00:10:33.578 05:39:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:33.578 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=3108864, buflen=4096 00:10:33.578 fio: pid=3232829, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:33.578 05:39:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:33.578 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=7200768, buflen=4096 00:10:33.578 fio: pid=3232828, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:33.578 05:39:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:33.578 05:39:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:33.836 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=22384640, buflen=4096 00:10:33.837 fio: pid=3232826, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:33.837 05:39:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:33.837 05:39:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:33.837 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=54939648, 
buflen=4096 00:10:33.837 fio: pid=3232827, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:33.837 05:39:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:33.837 05:39:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:34.096 00:10:34.096 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3232826: Mon Dec 16 05:39:07 2024 00:10:34.096 read: IOPS=1706, BW=6825KiB/s (6989kB/s)(21.3MiB/3203msec) 00:10:34.096 slat (usec): min=6, max=15670, avg=11.49, stdev=237.14 00:10:34.096 clat (usec): min=193, max=42047, avg=569.58, stdev=3532.74 00:10:34.096 lat (usec): min=200, max=42071, avg=581.06, stdev=3541.58 00:10:34.096 clat percentiles (usec): 00:10:34.096 | 1.00th=[ 215], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 241], 00:10:34.096 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:10:34.096 | 70.00th=[ 262], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 371], 00:10:34.096 | 99.00th=[ 478], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:10:34.096 | 99.99th=[42206] 00:10:34.096 bw ( KiB/s): min= 960, max=15208, per=27.22%, avg=6830.00, stdev=6319.02, samples=6 00:10:34.096 iops : min= 240, max= 3802, avg=1707.50, stdev=1579.75, samples=6 00:10:34.096 lat (usec) : 250=42.48%, 500=56.66%, 750=0.04%, 1000=0.02% 00:10:34.096 lat (msec) : 2=0.02%, 20=0.02%, 50=0.75% 00:10:34.096 cpu : usr=0.25%, sys=1.72%, ctx=5470, majf=0, minf=1 00:10:34.096 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.096 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.096 issued rwts: total=5466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.096 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.096 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3232827: Mon Dec 16 05:39:07 2024 00:10:34.096 read: IOPS=3932, BW=15.4MiB/s (16.1MB/s)(52.4MiB/3411msec) 00:10:34.096 slat (usec): min=6, max=26652, avg=15.51, stdev=379.03 00:10:34.096 clat (usec): min=159, max=41015, avg=235.42, stdev=799.35 00:10:34.096 lat (usec): min=166, max=41024, avg=250.93, stdev=885.54 00:10:34.096 clat percentiles (usec): 00:10:34.096 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 200], 00:10:34.096 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:10:34.096 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 239], 95.00th=[ 262], 00:10:34.096 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 603], 99.95th=[ 9110], 00:10:34.096 | 99.99th=[41157] 00:10:34.096 bw ( KiB/s): min=13112, max=18760, per=64.28%, avg=16128.00, stdev=2467.75, samples=6 00:10:34.096 iops : min= 3278, max= 4690, avg=4032.00, stdev=616.94, samples=6 00:10:34.096 lat (usec) : 250=93.60%, 500=6.28%, 750=0.02%, 1000=0.01% 00:10:34.096 lat (msec) : 4=0.01%, 10=0.02%, 20=0.01%, 50=0.04% 00:10:34.096 cpu : usr=1.88%, sys=6.16%, ctx=13420, majf=0, minf=2 00:10:34.096 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.096 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.096 issued rwts: 
total=13414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.096 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.096 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3232828: Mon Dec 16 05:39:07 2024 00:10:34.096 read: IOPS=594, BW=2376KiB/s (2434kB/s)(7032KiB/2959msec) 00:10:34.096 slat (nsec): min=6566, max=33451, avg=8476.42, stdev=3721.01 00:10:34.096 clat (usec): min=212, max=41412, avg=1660.71, stdev=7388.02 00:10:34.096 lat (usec): min=220, max=41420, avg=1669.18, stdev=7390.20 00:10:34.096 clat percentiles (usec): 00:10:34.096 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 245], 00:10:34.096 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:10:34.096 | 70.00th=[ 285], 80.00th=[ 310], 90.00th=[ 334], 95.00th=[ 355], 00:10:34.096 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:34.096 | 99.99th=[41157] 00:10:34.096 bw ( KiB/s): min= 104, max= 1720, per=1.93%, avg=483.20, stdev=699.11, samples=5 00:10:34.096 iops : min= 26, max= 430, avg=120.80, stdev=174.78, samples=5 00:10:34.096 lat (usec) : 250=29.56%, 500=66.86%, 750=0.06% 00:10:34.096 lat (msec) : 2=0.06%, 50=3.41% 00:10:34.096 cpu : usr=0.17%, sys=0.68%, ctx=1759, majf=0, minf=2 00:10:34.096 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.096 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.096 issued rwts: total=1759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.096 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.096 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3232829: Mon Dec 16 05:39:07 2024 00:10:34.097 read: IOPS=278, BW=1111KiB/s (1138kB/s)(3036KiB/2732msec) 00:10:34.097 slat (nsec): min=7116, max=35733, avg=9888.92, stdev=4314.23 00:10:34.097 clat (usec): min=190, max=42059, avg=3557.58, stdev=11071.53 00:10:34.097 lat (usec): min=198, max=42082, avg=3567.47, stdev=11074.65 00:10:34.097 clat percentiles (usec): 00:10:34.097 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 223], 20.00th=[ 243], 00:10:34.097 | 30.00th=[ 269], 40.00th=[ 285], 50.00th=[ 302], 60.00th=[ 314], 00:10:34.097 | 70.00th=[ 322], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[41157], 00:10:34.097 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:34.097 | 99.99th=[42206] 00:10:34.097 bw ( KiB/s): min= 104, max= 944, per=1.66%, avg=417.60, stdev=341.92, samples=5 00:10:34.097 iops : min= 26, max= 236, avg=104.40, stdev=85.48, samples=5 00:10:34.097 lat (usec) : 250=23.03%, 500=68.68%, 750=0.13% 00:10:34.097 lat (msec) : 50=8.03% 00:10:34.097 cpu : usr=0.22%, sys=0.40%, ctx=762, majf=0, minf=2 00:10:34.097 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.097 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.097 issued rwts: total=760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.097 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.097 00:10:34.097 Run status group 0 (all jobs): 00:10:34.097 READ: bw=24.5MiB/s (25.7MB/s), 1111KiB/s-15.4MiB/s (1138kB/s-16.1MB/s), io=83.6MiB (87.6MB), run=2732-3411msec 00:10:34.097 00:10:34.097 Disk stats (read/write): 00:10:34.097 nvme0n1: ios=5319/0, merge=0/0, ticks=3025/0, 
in_queue=3025, util=94.98% 00:10:34.097 nvme0n2: ios=13240/0, merge=0/0, ticks=2976/0, in_queue=2976, util=93.53% 00:10:34.097 nvme0n3: ios=1437/0, merge=0/0, ticks=2827/0, in_queue=2827, util=96.52% 00:10:34.097 nvme0n4: ios=454/0, merge=0/0, ticks=3558/0, in_queue=3558, util=99.07% 00:10:34.097 05:39:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.097 05:39:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:34.355 05:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.355 05:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:34.614 05:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.614 05:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:34.873 05:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.873 05:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:34.873 05:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:34.873 05:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3232689 00:10:34.873 05:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:34.873 05:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:35.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.131 05:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:35.131 05:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:35.131 05:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:35.131 05:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.131 05:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:35.131 05:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.131 05:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:35.131 05:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:35.131 05:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:35.131 nvmf hotplug test: fio failed as expected 00:10:35.131 05:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:35.390 rmmod nvme_tcp 00:10:35.390 rmmod nvme_fabrics 00:10:35.390 rmmod nvme_keyring 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 3229689 ']' 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 3229689 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3229689 ']' 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3229689 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3229689 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3229689' 00:10:35.390 killing process with pid 3229689 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3229689 00:10:35.390 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3229689 00:10:35.651 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:35.651 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:35.651 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:35.651 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 
00:10:35.651 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:10:35.651 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:35.651 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:10:35.651 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:35.651 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:35.651 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.651 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.651 05:39:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:38.187 00:10:38.187 real 0m26.672s 00:10:38.187 user 1m47.774s 00:10:38.187 sys 0m8.333s 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.187 ************************************ 00:10:38.187 END TEST nvmf_fio_target 00:10:38.187 ************************************ 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:38.187 ************************************ 00:10:38.187 START TEST nvmf_bdevio 00:10:38.187 ************************************ 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:38.187 * Looking for test storage... 
00:10:38.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:38.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.187 --rc genhtml_branch_coverage=1 00:10:38.187 --rc genhtml_function_coverage=1 00:10:38.187 --rc genhtml_legend=1 00:10:38.187 --rc geninfo_all_blocks=1 00:10:38.187 --rc geninfo_unexecuted_blocks=1 00:10:38.187 00:10:38.187 ' 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:38.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.187 --rc genhtml_branch_coverage=1 00:10:38.187 --rc genhtml_function_coverage=1 00:10:38.187 --rc genhtml_legend=1 00:10:38.187 --rc geninfo_all_blocks=1 00:10:38.187 --rc geninfo_unexecuted_blocks=1 00:10:38.187 00:10:38.187 ' 00:10:38.187 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:38.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.187 --rc genhtml_branch_coverage=1 00:10:38.187 --rc genhtml_function_coverage=1 00:10:38.187 --rc genhtml_legend=1 00:10:38.187 --rc geninfo_all_blocks=1 00:10:38.187 --rc geninfo_unexecuted_blocks=1 00:10:38.187 00:10:38.187 ' 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:38.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.188 --rc genhtml_branch_coverage=1 00:10:38.188 --rc genhtml_function_coverage=1 00:10:38.188 --rc genhtml_legend=1 00:10:38.188 --rc geninfo_all_blocks=1 00:10:38.188 --rc geninfo_unexecuted_blocks=1 00:10:38.188 00:10:38.188 ' 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:38.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:38.188 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:43.461 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:43.461 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:43.462 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:43.462 Found net devices under 0000:af:00.0: cvl_0_0 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:43.462 Found net devices under 0000:af:00.1: cvl_0_1 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:43.462 05:39:16 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:43.462 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:43.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:43.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:10:43.462 00:10:43.462 --- 10.0.0.2 ping statistics --- 00:10:43.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.462 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:43.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:43.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:10:43.462 00:10:43.462 --- 10.0.0.1 ping statistics --- 00:10:43.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.462 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=3237601 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 3237601 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3237601 ']' 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:43.462 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.462 [2024-12-16 05:39:17.135925] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
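For reference, the network plumbing that nvmf_tcp_init traces above can be reproduced by hand with standard iproute2/iptables commands. This is only a rough sketch using the interface names (cvl_0_0, cvl_0_1), addresses and namespace name taken from this run; the helper in nvmf/common.sh also covers the cases it checks for above (RDMA transports, virtual NICs, a second target IP), which are omitted here.

    # Manual approximation of the nvmf_tcp_init steps traced above (run as root).
    ip netns add cvl_0_0_ns_spdk                                        # namespace that owns the target-side port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                   # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator sanity check

The single-packet pings above correspond to the two ping statistics blocks in the trace and confirm both directions of the 10.0.0.0/24 link before the target is started inside the namespace.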
00:10:43.462 [2024-12-16 05:39:17.135968] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.462 [2024-12-16 05:39:17.194936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:43.462 [2024-12-16 05:39:17.233424] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:43.462 [2024-12-16 05:39:17.233466] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.462 [2024-12-16 05:39:17.233473] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.462 [2024-12-16 05:39:17.233479] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.462 [2024-12-16 05:39:17.233484] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:43.462 [2024-12-16 05:39:17.233603] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:43.462 [2024-12-16 05:39:17.233712] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:10:43.462 [2024-12-16 05:39:17.233798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.462 [2024-12-16 05:39:17.233800] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.721 [2024-12-16 05:39:17.384629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.721 Malloc0 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.721 05:39:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.721 [2024-12-16 05:39:17.427923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:43.721 { 00:10:43.721 "params": { 00:10:43.721 "name": "Nvme$subsystem", 00:10:43.721 "trtype": "$TEST_TRANSPORT", 00:10:43.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:43.721 "adrfam": "ipv4", 00:10:43.721 "trsvcid": "$NVMF_PORT", 00:10:43.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:43.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:43.721 "hdgst": ${hdgst:-false}, 00:10:43.721 "ddgst": ${ddgst:-false} 00:10:43.721 }, 00:10:43.721 "method": "bdev_nvme_attach_controller" 00:10:43.721 } 00:10:43.721 EOF 00:10:43.721 )") 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:10:43.721 05:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:43.721 "params": { 00:10:43.721 "name": "Nvme1", 00:10:43.721 "trtype": "tcp", 00:10:43.721 "traddr": "10.0.0.2", 00:10:43.721 "adrfam": "ipv4", 00:10:43.721 "trsvcid": "4420", 00:10:43.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:43.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:43.721 "hdgst": false, 00:10:43.721 "ddgst": false 00:10:43.721 }, 00:10:43.721 "method": "bdev_nvme_attach_controller" 00:10:43.721 }' 00:10:43.721 [2024-12-16 05:39:17.475780] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
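The rpc_cmd calls traced above configure the namespaced nvmf_tgt through its JSON-RPC socket (/var/tmp/spdk.sock, the socket waitforlisten polls). Issued by hand with scripts/rpc.py from the same SPDK checkout, the equivalent would look roughly like this; the method names and arguments are exactly those in the trace, only the wrapper differs.

    # Hand-run equivalent of the bdevio.sh RPC configuration traced above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # talks to the default /var/tmp/spdk.sock
    $RPC nvmf_create_transport -t tcp -o -u 8192                           # TCP transport with the test's options
    $RPC bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0          # expose Malloc0 as a namespace
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON blob printed just above by gen_nvmf_target_json is what the bdevio binary receives on /dev/fd/62, letting it attach to this subsystem over NVMe/TCP as the initiator for the block-device tests that follow.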
00:10:43.721 [2024-12-16 05:39:17.475820] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3237624 ] 00:10:43.721 [2024-12-16 05:39:17.531369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:43.721 [2024-12-16 05:39:17.572552] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.721 [2024-12-16 05:39:17.572570] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.721 [2024-12-16 05:39:17.572571] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.288 I/O targets: 00:10:44.288 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:44.288 00:10:44.288 00:10:44.288 CUnit - A unit testing framework for C - Version 2.1-3 00:10:44.288 http://cunit.sourceforge.net/ 00:10:44.288 00:10:44.288 00:10:44.288 Suite: bdevio tests on: Nvme1n1 00:10:44.288 Test: blockdev write read block ...passed 00:10:44.288 Test: blockdev write zeroes read block ...passed 00:10:44.288 Test: blockdev write zeroes read no split ...passed 00:10:44.288 Test: blockdev write zeroes read split ...passed 00:10:44.288 Test: blockdev write zeroes read split partial ...passed 00:10:44.288 Test: blockdev reset ...[2024-12-16 05:39:18.032817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:44.288 [2024-12-16 05:39:18.032885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2411950 (9): Bad file descriptor 00:10:44.288 [2024-12-16 05:39:18.130131] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:44.288 passed 00:10:44.547 Test: blockdev write read 8 blocks ...passed 00:10:44.547 Test: blockdev write read size > 128k ...passed 00:10:44.547 Test: blockdev write read invalid size ...passed 00:10:44.547 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:44.547 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:44.547 Test: blockdev write read max offset ...passed 00:10:44.547 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:44.547 Test: blockdev writev readv 8 blocks ...passed 00:10:44.547 Test: blockdev writev readv 30 x 1block ...passed 00:10:44.547 Test: blockdev writev readv block ...passed 00:10:44.547 Test: blockdev writev readv size > 128k ...passed 00:10:44.547 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:44.547 Test: blockdev comparev and writev ...[2024-12-16 05:39:18.343453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:44.547 [2024-12-16 05:39:18.343486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:44.547 [2024-12-16 05:39:18.343500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:44.547 [2024-12-16 05:39:18.343508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:44.547 [2024-12-16 05:39:18.343766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:44.547 [2024-12-16 05:39:18.343776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:44.547 [2024-12-16 05:39:18.343788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:44.547 [2024-12-16 05:39:18.343795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:44.547 [2024-12-16 05:39:18.344030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:44.547 [2024-12-16 05:39:18.344041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:44.547 [2024-12-16 05:39:18.344052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:44.547 [2024-12-16 05:39:18.344059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:44.547 [2024-12-16 05:39:18.344302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:44.547 [2024-12-16 05:39:18.344312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:44.547 [2024-12-16 05:39:18.344324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:44.547 [2024-12-16 05:39:18.344330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:44.547 passed 00:10:44.807 Test: blockdev nvme passthru rw ...passed 00:10:44.807 Test: blockdev nvme passthru vendor specific ...[2024-12-16 05:39:18.428141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:44.807 [2024-12-16 05:39:18.428159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:44.807 [2024-12-16 05:39:18.428271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:44.807 [2024-12-16 05:39:18.428281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:44.807 [2024-12-16 05:39:18.428391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:44.807 [2024-12-16 05:39:18.428401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:44.807 [2024-12-16 05:39:18.428505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:44.807 [2024-12-16 05:39:18.428516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:44.807 passed 00:10:44.807 Test: blockdev nvme admin passthru ...passed 00:10:44.807 Test: blockdev copy ...passed 00:10:44.807 00:10:44.807 Run Summary: Type Total Ran Passed Failed Inactive 00:10:44.807 suites 1 1 n/a 0 0 00:10:44.807 tests 23 23 23 0 0 00:10:44.807 asserts 152 152 152 0 n/a 00:10:44.807 00:10:44.807 Elapsed time = 1.211 seconds 00:10:44.807 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:44.807 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.807 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:44.807 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.807 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:44.807 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:44.807 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:44.807 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:44.807 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:44.807 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:44.807 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:44.807 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:44.807 rmmod nvme_tcp 00:10:45.067 rmmod nvme_fabrics 00:10:45.067 rmmod nvme_keyring 00:10:45.067 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:45.067 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:45.067 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:10:45.067 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 3237601 ']' 00:10:45.067 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 3237601 00:10:45.067 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3237601 ']' 00:10:45.067 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3237601 00:10:45.067 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:45.067 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:45.067 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3237601 00:10:45.067 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:45.067 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:45.067 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3237601' 00:10:45.067 killing process with pid 3237601 00:10:45.067 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3237601 00:10:45.067 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3237601 00:10:45.326 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:45.326 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:45.326 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:45.326 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:45.326 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:10:45.326 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:45.326 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:10:45.326 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:45.326 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:45.326 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.326 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.326 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.231 05:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:47.231 00:10:47.231 real 0m9.516s 00:10:47.231 user 0m10.856s 00:10:47.231 sys 0m4.535s 00:10:47.231 05:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:47.231 05:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:47.231 ************************************ 00:10:47.231 END TEST nvmf_bdevio 00:10:47.231 ************************************ 00:10:47.231 05:39:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:47.231 00:10:47.231 real 4m28.794s 00:10:47.231 user 10m16.004s 00:10:47.231 sys 1m32.507s 
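The nvmftestfini sequence traced above undoes the setup: it kills the target process, strips the SPDK_NVMF-tagged iptables rule, removes the network namespace and flushes the leftover initiator address (the nvme-tcp/nvme-fabrics modules were already unloaded just before by nvmfcleanup). A rough manual equivalent, reusing the PID and names from this particular run, would be:

    # Approximate manual teardown matching the nvmftestfini trace above.
    kill 3237601                                             # stop the namespaced nvmf_tgt (PID from this run)
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop the SPDK_NVMF-tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk                          # physical cvl_0_0 falls back to the root namespace
    ip -4 addr flush cvl_0_1                                 # clear the initiator-side address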
00:10:47.231 05:39:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:47.231 05:39:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:47.231 ************************************ 00:10:47.231 END TEST nvmf_target_core 00:10:47.231 ************************************ 00:10:47.491 05:39:21 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:47.491 05:39:21 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:47.491 05:39:21 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:47.491 05:39:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:47.491 ************************************ 00:10:47.491 START TEST nvmf_target_extra 00:10:47.491 ************************************ 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:47.491 * Looking for test storage... 00:10:47.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:47.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.491 --rc genhtml_branch_coverage=1 00:10:47.491 --rc genhtml_function_coverage=1 00:10:47.491 --rc genhtml_legend=1 00:10:47.491 --rc geninfo_all_blocks=1 00:10:47.491 --rc geninfo_unexecuted_blocks=1 00:10:47.491 00:10:47.491 ' 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:47.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.491 --rc genhtml_branch_coverage=1 00:10:47.491 --rc genhtml_function_coverage=1 00:10:47.491 --rc genhtml_legend=1 00:10:47.491 --rc geninfo_all_blocks=1 00:10:47.491 --rc geninfo_unexecuted_blocks=1 00:10:47.491 00:10:47.491 ' 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:47.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.491 --rc genhtml_branch_coverage=1 00:10:47.491 --rc genhtml_function_coverage=1 00:10:47.491 --rc genhtml_legend=1 00:10:47.491 --rc geninfo_all_blocks=1 00:10:47.491 --rc geninfo_unexecuted_blocks=1 00:10:47.491 00:10:47.491 ' 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:47.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.491 --rc genhtml_branch_coverage=1 00:10:47.491 --rc genhtml_function_coverage=1 00:10:47.491 --rc genhtml_legend=1 00:10:47.491 --rc geninfo_all_blocks=1 00:10:47.491 --rc geninfo_unexecuted_blocks=1 00:10:47.491 00:10:47.491 ' 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:47.491 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:47.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:47.492 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:47.752 ************************************ 00:10:47.752 START TEST nvmf_example 00:10:47.752 ************************************ 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:47.752 * Looking for test storage... 
00:10:47.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:47.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.752 --rc genhtml_branch_coverage=1 00:10:47.752 --rc genhtml_function_coverage=1 00:10:47.752 --rc genhtml_legend=1 00:10:47.752 --rc geninfo_all_blocks=1 00:10:47.752 --rc geninfo_unexecuted_blocks=1 00:10:47.752 00:10:47.752 ' 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:47.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.752 --rc genhtml_branch_coverage=1 00:10:47.752 --rc genhtml_function_coverage=1 00:10:47.752 --rc genhtml_legend=1 00:10:47.752 --rc geninfo_all_blocks=1 00:10:47.752 --rc geninfo_unexecuted_blocks=1 00:10:47.752 00:10:47.752 ' 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:47.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.752 --rc genhtml_branch_coverage=1 00:10:47.752 --rc genhtml_function_coverage=1 00:10:47.752 --rc genhtml_legend=1 00:10:47.752 --rc geninfo_all_blocks=1 00:10:47.752 --rc geninfo_unexecuted_blocks=1 00:10:47.752 00:10:47.752 ' 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:47.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.752 --rc genhtml_branch_coverage=1 00:10:47.752 --rc genhtml_function_coverage=1 00:10:47.752 --rc genhtml_legend=1 00:10:47.752 --rc geninfo_all_blocks=1 00:10:47.752 --rc geninfo_unexecuted_blocks=1 00:10:47.752 00:10:47.752 ' 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:47.752 05:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.752 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:47.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:47.753 05:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:47.753 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:54.323 05:39:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:54.323 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in 
"${pci_devs[@]}" 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:54.323 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:54.323 Found net devices under 0000:af:00.0: cvl_0_0 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:54.323 Found net devices under 0000:af:00.1: cvl_0_1 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 
-- # is_hw=yes 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:54.323 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:54.323 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:54.323 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:54.323 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:54.323 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:54.323 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:54.323 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:54.323 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:54.323 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:54.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:54.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:10:54.323 00:10:54.323 --- 10.0.0.2 ping statistics --- 00:10:54.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.323 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:10:54.323 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:54.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:54.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:10:54.323 00:10:54.323 --- 10.0.0.1 ping statistics --- 00:10:54.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.324 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # return 0 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3241381 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3241381 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 3241381 ']' 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:54.324 05:39:27 
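
For readers following the setup phase above: the ip/iptables commands logged there amount to the following sketch (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are taken from this run; this is a paraphrase of what the common.sh helpers do, not an excerpt of the test scripts):

# Target-side port moves into its own network namespace so target and initiator
# traffic cross the physical link between the two E810 ports.
sudo ip netns add cvl_0_0_ns_spdk
sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
sudo ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
sudo ip link set cvl_0_1 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP on the initiator port
ping -c 1 10.0.0.2                                                      # initiator -> target, as verified below
sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
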
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:54.324 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:54.583 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:06.794 Initializing NVMe Controllers 00:11:06.794 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:06.794 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:06.794 Initialization complete. Launching workers. 00:11:06.794 ======================================================== 00:11:06.794 Latency(us) 00:11:06.794 Device Information : IOPS MiB/s Average min max 00:11:06.794 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18454.88 72.09 3467.28 684.75 15535.76 00:11:06.794 ======================================================== 00:11:06.794 Total : 18454.88 72.09 3467.28 684.75 15535.76 00:11:06.794 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:06.794 rmmod nvme_tcp 00:11:06.794 rmmod nvme_fabrics 00:11:06.794 rmmod nvme_keyring 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@513 -- # '[' -n 3241381 ']' 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # killprocess 3241381 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 3241381 ']' 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 3241381 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3241381 00:11:06.794 05:39:38 
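
The rpc_cmd calls and the perf invocation logged above boil down to the following sketch, written directly against SPDK's scripts/rpc.py (assumes the nvmf example app is already listening on the default /var/tmp/spdk.sock and that commands run from the SPDK source tree, as in this job):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8 KiB in-capsule data
./scripts/rpc.py bdev_malloc_create 64 512                               # 64 MiB ram bdev, 512 B blocks -> Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Load generation, as in the run whose results are printed above: 10 s of 4 KiB
# random read/write at queue depth 64 against the TCP listener.
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
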
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3241381' 00:11:06.794 killing process with pid 3241381 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 3241381 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 3241381 00:11:06.794 nvmf threads initialize successfully 00:11:06.794 bdev subsystem init successfully 00:11:06.794 created a nvmf target service 00:11:06.794 create targets's poll groups done 00:11:06.794 all subsystems of target started 00:11:06.794 nvmf target is running 00:11:06.794 all subsystems of target stopped 00:11:06.794 destroy targets's poll groups done 00:11:06.794 destroyed the nvmf target service 00:11:06.794 bdev subsystem finish successfully 00:11:06.794 nvmf threads destroy successfully 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-save 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-restore 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.794 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.054 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:07.054 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:07.054 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:07.054 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:07.054 00:11:07.054 real 0m19.519s 00:11:07.054 user 0m46.002s 00:11:07.054 sys 0m5.798s 00:11:07.054 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.054 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:07.054 ************************************ 00:11:07.054 END TEST nvmf_example 00:11:07.054 ************************************ 00:11:07.314 05:39:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:07.314 05:39:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:07.314 05:39:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.314 05:39:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:07.314 ************************************ 00:11:07.314 START TEST nvmf_filesystem 00:11:07.314 ************************************ 00:11:07.314 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:07.314 * Looking for test storage... 00:11:07.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:07.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.314 --rc genhtml_branch_coverage=1 00:11:07.314 --rc genhtml_function_coverage=1 00:11:07.314 --rc genhtml_legend=1 00:11:07.314 --rc geninfo_all_blocks=1 00:11:07.314 --rc geninfo_unexecuted_blocks=1 00:11:07.314 00:11:07.314 ' 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:07.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.314 --rc genhtml_branch_coverage=1 00:11:07.314 --rc genhtml_function_coverage=1 00:11:07.314 --rc genhtml_legend=1 00:11:07.314 --rc geninfo_all_blocks=1 00:11:07.314 --rc geninfo_unexecuted_blocks=1 00:11:07.314 00:11:07.314 ' 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:07.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.314 --rc genhtml_branch_coverage=1 00:11:07.314 --rc genhtml_function_coverage=1 00:11:07.314 --rc genhtml_legend=1 00:11:07.314 --rc geninfo_all_blocks=1 00:11:07.314 --rc geninfo_unexecuted_blocks=1 00:11:07.314 00:11:07.314 ' 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:07.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.314 --rc genhtml_branch_coverage=1 00:11:07.314 --rc genhtml_function_coverage=1 00:11:07.314 --rc genhtml_legend=1 00:11:07.314 --rc geninfo_all_blocks=1 00:11:07.314 --rc geninfo_unexecuted_blocks=1 00:11:07.314 00:11:07.314 ' 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:07.314 05:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:07.314 05:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:07.314 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:07.315 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:07.315 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:07.315 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:11:07.315 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:07.315 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:11:07.577 05:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:11:07.577 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:07.578 #define SPDK_CONFIG_H 00:11:07.578 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:07.578 #define SPDK_CONFIG_APPS 1 00:11:07.578 #define SPDK_CONFIG_ARCH native 00:11:07.578 #undef SPDK_CONFIG_ASAN 00:11:07.578 #undef SPDK_CONFIG_AVAHI 00:11:07.578 #undef SPDK_CONFIG_CET 00:11:07.578 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:07.578 #define SPDK_CONFIG_COVERAGE 1 00:11:07.578 #define SPDK_CONFIG_CROSS_PREFIX 00:11:07.578 #undef SPDK_CONFIG_CRYPTO 00:11:07.578 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:07.578 #undef SPDK_CONFIG_CUSTOMOCF 00:11:07.578 #undef SPDK_CONFIG_DAOS 00:11:07.578 #define SPDK_CONFIG_DAOS_DIR 00:11:07.578 #define SPDK_CONFIG_DEBUG 1 00:11:07.578 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:07.578 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:07.578 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:07.578 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:07.578 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:07.578 #undef SPDK_CONFIG_DPDK_UADK 00:11:07.578 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:07.578 #define SPDK_CONFIG_EXAMPLES 1 00:11:07.578 #undef SPDK_CONFIG_FC 00:11:07.578 #define SPDK_CONFIG_FC_PATH 00:11:07.578 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:07.578 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:07.578 #define SPDK_CONFIG_FSDEV 1 00:11:07.578 #undef SPDK_CONFIG_FUSE 00:11:07.578 #undef SPDK_CONFIG_FUZZER 00:11:07.578 #define SPDK_CONFIG_FUZZER_LIB 00:11:07.578 #undef SPDK_CONFIG_GOLANG 00:11:07.578 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:07.578 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:07.578 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:07.578 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:07.578 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:07.578 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:07.578 #undef SPDK_CONFIG_HAVE_LZ4 00:11:07.578 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:07.578 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:07.578 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:07.578 #define SPDK_CONFIG_IDXD 1 00:11:07.578 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:07.578 #undef SPDK_CONFIG_IPSEC_MB 00:11:07.578 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:07.578 #define SPDK_CONFIG_ISAL 1 00:11:07.578 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:07.578 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:07.578 #define SPDK_CONFIG_LIBDIR 00:11:07.578 #undef SPDK_CONFIG_LTO 00:11:07.578 #define SPDK_CONFIG_MAX_LCORES 128 00:11:07.578 #define SPDK_CONFIG_NVME_CUSE 1 00:11:07.578 #undef SPDK_CONFIG_OCF 00:11:07.578 #define SPDK_CONFIG_OCF_PATH 00:11:07.578 #define SPDK_CONFIG_OPENSSL_PATH 00:11:07.578 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:07.578 #define SPDK_CONFIG_PGO_DIR 00:11:07.578 #undef SPDK_CONFIG_PGO_USE 00:11:07.578 #define SPDK_CONFIG_PREFIX /usr/local 00:11:07.578 #undef SPDK_CONFIG_RAID5F 00:11:07.578 #undef SPDK_CONFIG_RBD 00:11:07.578 #define SPDK_CONFIG_RDMA 1 00:11:07.578 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:07.578 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:07.578 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:07.578 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:07.578 #define SPDK_CONFIG_SHARED 1 00:11:07.578 #undef SPDK_CONFIG_SMA 00:11:07.578 
#define SPDK_CONFIG_TESTS 1 00:11:07.578 #undef SPDK_CONFIG_TSAN 00:11:07.578 #define SPDK_CONFIG_UBLK 1 00:11:07.578 #define SPDK_CONFIG_UBSAN 1 00:11:07.578 #undef SPDK_CONFIG_UNIT_TESTS 00:11:07.578 #undef SPDK_CONFIG_URING 00:11:07.578 #define SPDK_CONFIG_URING_PATH 00:11:07.578 #undef SPDK_CONFIG_URING_ZNS 00:11:07.578 #undef SPDK_CONFIG_USDT 00:11:07.578 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:07.578 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:07.578 #define SPDK_CONFIG_VFIO_USER 1 00:11:07.578 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:07.578 #define SPDK_CONFIG_VHOST 1 00:11:07.578 #define SPDK_CONFIG_VIRTIO 1 00:11:07.578 #undef SPDK_CONFIG_VTUNE 00:11:07.578 #define SPDK_CONFIG_VTUNE_DIR 00:11:07.578 #define SPDK_CONFIG_WERROR 1 00:11:07.578 #define SPDK_CONFIG_WPDK_DIR 00:11:07.578 #undef SPDK_CONFIG_XNVME 00:11:07.578 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:07.578 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:07.579 05:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:07.579 05:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v23.11 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:07.579 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:07.580 05:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:07.580 05:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:07.580 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j96 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3243773 ]] 00:11:07.581 05:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3243773 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.RsBN0M 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.RsBN0M/tests/target /tmp/spdk.RsBN0M 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@373 -- # avails["$mount"]=722997248 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4561432576 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=87405113344 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=95552417792 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=8147304448 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=47766175744 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=47776206848 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=19087466496 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=19110486016 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23019520 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=47775948800 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=47776210944 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=262144 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=9555226624 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=9555238912 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:07.581 * Looking for test storage... 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=87405113344 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=10361896960 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:07.581 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.582 05:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:07.582 05:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:07.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.582 --rc genhtml_branch_coverage=1 00:11:07.582 --rc genhtml_function_coverage=1 00:11:07.582 --rc genhtml_legend=1 00:11:07.582 --rc geninfo_all_blocks=1 00:11:07.582 --rc geninfo_unexecuted_blocks=1 00:11:07.582 00:11:07.582 ' 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:07.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.582 --rc genhtml_branch_coverage=1 00:11:07.582 --rc genhtml_function_coverage=1 00:11:07.582 --rc genhtml_legend=1 00:11:07.582 --rc geninfo_all_blocks=1 00:11:07.582 --rc geninfo_unexecuted_blocks=1 00:11:07.582 00:11:07.582 ' 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:07.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.582 --rc genhtml_branch_coverage=1 00:11:07.582 --rc genhtml_function_coverage=1 00:11:07.582 --rc genhtml_legend=1 00:11:07.582 --rc geninfo_all_blocks=1 00:11:07.582 --rc geninfo_unexecuted_blocks=1 00:11:07.582 00:11:07.582 ' 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:07.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.582 --rc genhtml_branch_coverage=1 00:11:07.582 --rc 
genhtml_function_coverage=1 00:11:07.582 --rc genhtml_legend=1 00:11:07.582 --rc geninfo_all_blocks=1 00:11:07.582 --rc geninfo_unexecuted_blocks=1 00:11:07.582 00:11:07.582 ' 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.582 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:07.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.583 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.842 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:07.842 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:07.842 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:07.842 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:13.115 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:13.115 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:13.115 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:13.115 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:13.115 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:13.115 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:13.115 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:13.115 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:13.115 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:13.115 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:13.115 
05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:13.115 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:13.115 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:13.115 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:13.115 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:13.115 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.115 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.115 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:13.116 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 
00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:13.116 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:13.116 Found net devices under 0000:af:00.0: cvl_0_0 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:13.116 Found net devices under 0000:af:00.1: cvl_0_1 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 
00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # is_hw=yes 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:11:13.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:13.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:11:13.116 00:11:13.116 --- 10.0.0.2 ping statistics --- 00:11:13.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.116 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:13.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:13.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:11:13.116 00:11:13.116 --- 10.0.0.1 ping statistics --- 00:11:13.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.116 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # return 0 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:13.116 ************************************ 00:11:13.116 START TEST nvmf_filesystem_no_in_capsule 00:11:13.116 ************************************ 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:13.116 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=3246832 00:11:13.117 05:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 3246832 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3246832 ']' 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:13.117 [2024-12-16 05:39:46.662126] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:13.117 [2024-12-16 05:39:46.662168] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.117 [2024-12-16 05:39:46.720712] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:13.117 [2024-12-16 05:39:46.761443] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.117 [2024-12-16 05:39:46.761482] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.117 [2024-12-16 05:39:46.761489] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.117 [2024-12-16 05:39:46.761496] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.117 [2024-12-16 05:39:46.761501] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
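What nvmf_tcp_init set up above is a self-contained two-port loopback: one E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side (10.0.0.2), while its sibling port (cvl_0_1) stays in the root namespace as the initiator side (10.0.0.1). A condensed sketch of the equivalent manual steps, followed by the target launch (interface names, addresses and flags taken from the log; paths shortened and error handling omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP (port 4420)
    ping -c 1 10.0.0.2                                  # root namespace -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> root namespace
    modprobe nvme-tcp                                   # host-side NVMe/TCP initiator driver
    # the target application then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &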
00:11:13.117 [2024-12-16 05:39:46.761547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.117 [2024-12-16 05:39:46.761631] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.117 [2024-12-16 05:39:46.761721] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.117 [2024-12-16 05:39:46.761722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.117 [2024-12-16 05:39:46.900413] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.117 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.376 Malloc1 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.376 05:39:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.376 [2024-12-16 05:39:47.043873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:13.376 { 00:11:13.376 "name": "Malloc1", 00:11:13.376 "aliases": [ 00:11:13.376 "ae70e9a4-0928-41fd-b577-1e12bafc7ca9" 00:11:13.376 ], 00:11:13.376 "product_name": "Malloc disk", 00:11:13.376 "block_size": 512, 00:11:13.376 "num_blocks": 1048576, 00:11:13.376 "uuid": "ae70e9a4-0928-41fd-b577-1e12bafc7ca9", 00:11:13.376 "assigned_rate_limits": { 00:11:13.376 "rw_ios_per_sec": 0, 00:11:13.376 "rw_mbytes_per_sec": 0, 00:11:13.376 "r_mbytes_per_sec": 0, 00:11:13.376 "w_mbytes_per_sec": 0 00:11:13.376 }, 00:11:13.376 "claimed": true, 00:11:13.376 "claim_type": "exclusive_write", 00:11:13.376 "zoned": false, 00:11:13.376 "supported_io_types": { 00:11:13.376 "read": 
true, 00:11:13.376 "write": true, 00:11:13.376 "unmap": true, 00:11:13.376 "flush": true, 00:11:13.376 "reset": true, 00:11:13.376 "nvme_admin": false, 00:11:13.376 "nvme_io": false, 00:11:13.376 "nvme_io_md": false, 00:11:13.376 "write_zeroes": true, 00:11:13.376 "zcopy": true, 00:11:13.376 "get_zone_info": false, 00:11:13.376 "zone_management": false, 00:11:13.376 "zone_append": false, 00:11:13.376 "compare": false, 00:11:13.376 "compare_and_write": false, 00:11:13.376 "abort": true, 00:11:13.376 "seek_hole": false, 00:11:13.376 "seek_data": false, 00:11:13.376 "copy": true, 00:11:13.376 "nvme_iov_md": false 00:11:13.376 }, 00:11:13.376 "memory_domains": [ 00:11:13.376 { 00:11:13.376 "dma_device_id": "system", 00:11:13.376 "dma_device_type": 1 00:11:13.376 }, 00:11:13.376 { 00:11:13.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.376 "dma_device_type": 2 00:11:13.376 } 00:11:13.376 ], 00:11:13.376 "driver_specific": {} 00:11:13.376 } 00:11:13.376 ]' 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:13.376 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:14.760 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:14.760 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:14.760 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:14.760 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:14.760 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:16.755 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:16.755 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:16.755 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:16.755 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:16.755 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:16.755 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:16.755 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:16.755 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:16.755 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:16.755 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:16.755 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:16.755 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:16.755 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:16.755 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:16.755 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:16.755 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:16.756 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:16.756 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:17.014 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:17.951 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:17.951 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:17.951 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:17.951 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:17.951 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.209 ************************************ 00:11:18.209 START TEST filesystem_ext4 00:11:18.209 ************************************ 00:11:18.209 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
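Condensing the RPC and host-side steps logged above, the no-in-capsule variant provisions the target and attaches the host roughly as follows (NQN, serial, addresses and sizes copied from the log; rpc_cmd is the suite's wrapper, shown here as a direct scripts/rpc.py invocation, and the wait loop is a simplified stand-in for waitforserial):

    # target side (RPCs against the nvmf_tgt running in the namespace)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # -c 0: no in-capsule data
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB RAM disk, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # host side (root namespace); the --hostnqn/--hostid pair from the log is also passed
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 2; done
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%      # one partition for the fs tests
    partprobe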
00:11:18.209 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:18.209 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:18.209 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:18.209 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:18.209 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:18.209 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:18.209 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:18.209 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:18.209 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:18.209 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:18.209 mke2fs 1.47.0 (5-Feb-2023) 00:11:18.209 Discarding device blocks: 0/522240 done 00:11:18.209 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:18.209 Filesystem UUID: e112a5ea-b9aa-4975-a365-0f652cc13c56 00:11:18.209 Superblock backups stored on blocks: 00:11:18.209 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:18.209 00:11:18.209 Allocating group tables: 0/64 done 00:11:18.209 Writing inode tables: 0/64 done 00:11:18.468 Creating journal (8192 blocks): done 00:11:20.670 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:11:20.670 00:11:20.670 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:20.670 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:27.238 
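Each filesystem_* subtest above follows the same pattern: format the partition, mount it, perform a small create/delete cycle, unmount, then confirm that the target process is still alive and the NVMe/TCP namespace is still attached. Roughly, using the pid and device names from this run (the btrfs and xfs passes differ only in the mkfs command):

    mkfs.ext4 -F /dev/nvme0n1p1              # mkfs.btrfs -f / mkfs.xfs -f for the other passes
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 3246832                          # nvmf_tgt must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1    # namespace block device still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still present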
05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3246832 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:27.238 00:11:27.238 real 0m8.543s 00:11:27.238 user 0m0.026s 00:11:27.238 sys 0m0.076s 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:27.238 ************************************ 00:11:27.238 END TEST filesystem_ext4 00:11:27.238 ************************************ 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.238 ************************************ 00:11:27.238 START TEST filesystem_btrfs 00:11:27.238 ************************************ 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:27.238 05:40:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:27.238 btrfs-progs v6.8.1 00:11:27.238 See https://btrfs.readthedocs.io for more information. 00:11:27.238 00:11:27.238 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:27.238 NOTE: several default settings have changed in version 5.15, please make sure 00:11:27.238 this does not affect your deployments: 00:11:27.238 - DUP for metadata (-m dup) 00:11:27.238 - enabled no-holes (-O no-holes) 00:11:27.238 - enabled free-space-tree (-R free-space-tree) 00:11:27.238 00:11:27.238 Label: (null) 00:11:27.238 UUID: 1038b7f1-6def-4a89-9ab9-372ea6ea73b1 00:11:27.238 Node size: 16384 00:11:27.238 Sector size: 4096 (CPU page size: 4096) 00:11:27.238 Filesystem size: 510.00MiB 00:11:27.238 Block group profiles: 00:11:27.238 Data: single 8.00MiB 00:11:27.238 Metadata: DUP 32.00MiB 00:11:27.238 System: DUP 8.00MiB 00:11:27.238 SSD detected: yes 00:11:27.238 Zoned device: no 00:11:27.238 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:27.238 Checksum: crc32c 00:11:27.238 Number of devices: 1 00:11:27.238 Devices: 00:11:27.238 ID SIZE PATH 00:11:27.238 1 510.00MiB /dev/nvme0n1p1 00:11:27.238 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:27.238 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3246832 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:27.498 
05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:27.498 00:11:27.498 real 0m0.751s 00:11:27.498 user 0m0.034s 00:11:27.498 sys 0m0.107s 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:27.498 ************************************ 00:11:27.498 END TEST filesystem_btrfs 00:11:27.498 ************************************ 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.498 ************************************ 00:11:27.498 START TEST filesystem_xfs 00:11:27.498 ************************************ 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:27.498 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:27.498 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:27.498 = sectsz=512 attr=2, projid32bit=1 00:11:27.498 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:27.498 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:27.498 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:27.498 = sunit=0 swidth=0 blks 00:11:27.498 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:27.498 log =internal log bsize=4096 blocks=16384, version=2 00:11:27.498 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:27.498 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:28.873 Discarding blocks...Done. 00:11:28.873 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:28.873 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:31.410 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:31.410 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:31.410 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:31.410 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:31.410 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:31.410 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:31.410 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3246832 00:11:31.410 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:31.410 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:31.410 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:31.410 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:31.410 00:11:31.410 real 0m3.585s 00:11:31.411 user 0m0.027s 00:11:31.411 sys 0m0.071s 00:11:31.411 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.411 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:31.411 ************************************ 00:11:31.411 END TEST filesystem_xfs 00:11:31.411 ************************************ 00:11:31.411 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:31.411 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:31.411 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:31.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.670 05:40:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3246832 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3246832 ']' 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3246832 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3246832 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3246832' 00:11:31.670 killing process with pid 3246832 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 3246832 00:11:31.670 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 3246832 00:11:31.929 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:31.929 00:11:31.929 real 0m19.143s 00:11:31.929 user 1m15.378s 00:11:31.929 sys 0m1.469s 00:11:31.929 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.929 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.929 ************************************ 00:11:31.929 END TEST nvmf_filesystem_no_in_capsule 00:11:31.929 ************************************ 00:11:32.188 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:32.188 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:32.188 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.188 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:32.188 ************************************ 00:11:32.188 START TEST nvmf_filesystem_in_capsule 00:11:32.188 ************************************ 00:11:32.188 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:32.188 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:32.188 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:32.188 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:32.188 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:32.188 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.188 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=3250161 00:11:32.188 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 3250161 00:11:32.188 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:32.188 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3250161 ']' 00:11:32.188 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.188 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:32.188 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
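The second pass repeats the same filesystem matrix; the only functional difference is the transport's in-capsule data size. Instead of -c 0, the transport is created so that small host writes can travel inline in the NVMe/TCP command capsule rather than in a separate data phase. Sketch of the differing call (value taken from the log below; same rpc.py assumption as above):

    # in-capsule variant: allow up to 4 KiB of data inline in the command capsule
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096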
00:11:32.188 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:32.188 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.188 [2024-12-16 05:40:05.890962] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:32.188 [2024-12-16 05:40:05.891004] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.188 [2024-12-16 05:40:05.952153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.188 [2024-12-16 05:40:05.992249] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.188 [2024-12-16 05:40:05.992287] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.188 [2024-12-16 05:40:05.992296] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.188 [2024-12-16 05:40:05.992302] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.188 [2024-12-16 05:40:05.992309] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.188 [2024-12-16 05:40:05.992365] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.188 [2024-12-16 05:40:05.992385] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.188 [2024-12-16 05:40:05.992471] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.188 [2024-12-16 05:40:05.992472] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.447 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:32.447 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:32.447 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:32.447 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:32.447 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.447 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.447 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.448 [2024-12-16 05:40:06.144825] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.448 05:40:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.448 Malloc1 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.448 [2024-12-16 05:40:06.287724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:32.448 05:40:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.448 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.707 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.707 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:32.707 { 00:11:32.707 "name": "Malloc1", 00:11:32.707 "aliases": [ 00:11:32.707 "cc8686a2-4b96-4399-b45c-e56c8866e097" 00:11:32.707 ], 00:11:32.707 "product_name": "Malloc disk", 00:11:32.707 "block_size": 512, 00:11:32.707 "num_blocks": 1048576, 00:11:32.707 "uuid": "cc8686a2-4b96-4399-b45c-e56c8866e097", 00:11:32.707 "assigned_rate_limits": { 00:11:32.707 "rw_ios_per_sec": 0, 00:11:32.707 "rw_mbytes_per_sec": 0, 00:11:32.707 "r_mbytes_per_sec": 0, 00:11:32.707 "w_mbytes_per_sec": 0 00:11:32.707 }, 00:11:32.707 "claimed": true, 00:11:32.707 "claim_type": "exclusive_write", 00:11:32.707 "zoned": false, 00:11:32.707 "supported_io_types": { 00:11:32.707 "read": true, 00:11:32.707 "write": true, 00:11:32.707 "unmap": true, 00:11:32.707 "flush": true, 00:11:32.707 "reset": true, 00:11:32.707 "nvme_admin": false, 00:11:32.707 "nvme_io": false, 00:11:32.707 "nvme_io_md": false, 00:11:32.707 "write_zeroes": true, 00:11:32.707 "zcopy": true, 00:11:32.707 "get_zone_info": false, 00:11:32.707 "zone_management": false, 00:11:32.707 "zone_append": false, 00:11:32.707 "compare": false, 00:11:32.707 "compare_and_write": false, 00:11:32.707 "abort": true, 00:11:32.707 "seek_hole": false, 00:11:32.707 "seek_data": false, 00:11:32.707 "copy": true, 00:11:32.707 "nvme_iov_md": false 00:11:32.707 }, 00:11:32.707 "memory_domains": [ 00:11:32.707 { 00:11:32.707 "dma_device_id": "system", 00:11:32.707 "dma_device_type": 1 00:11:32.707 }, 00:11:32.707 { 00:11:32.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.707 "dma_device_type": 2 00:11:32.707 } 00:11:32.707 ], 00:11:32.707 "driver_specific": {} 00:11:32.707 } 00:11:32.707 ]' 00:11:32.707 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:32.707 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:32.707 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:32.707 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:32.707 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:32.707 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:32.707 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:32.707 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:34.084 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:34.084 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:34.084 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.084 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:34.084 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:35.987 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:35.987 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:35.987 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:35.987 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:35.987 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:35.987 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:35.987 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:35.987 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:35.987 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:35.987 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:35.987 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:35.987 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:35.987 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:35.987 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:35.987 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:35.987 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:35.987 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:35.987 05:40:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:36.245 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:37.179 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:37.179 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:37.179 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:37.179 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:37.179 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.179 ************************************ 00:11:37.179 START TEST filesystem_in_capsule_ext4 00:11:37.179 ************************************ 00:11:37.179 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:37.179 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:37.179 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:37.179 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:37.179 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:37.179 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:37.179 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:37.179 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:37.179 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:37.179 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:37.179 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:37.179 mke2fs 1.47.0 (5-Feb-2023) 00:11:37.438 Discarding device blocks: 0/522240 done 00:11:37.438 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:37.438 Filesystem UUID: 6b170aa0-6a7f-4f4a-9bbe-f13e62e35233 00:11:37.438 Superblock backups stored on blocks: 00:11:37.438 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:37.438 00:11:37.438 Allocating group tables: 0/64 done 00:11:37.438 Writing inode tables: 
0/64 done 00:11:40.233 Creating journal (8192 blocks): done 00:11:42.434 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:11:42.434 00:11:42.434 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:42.434 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:49.000 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:49.000 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:49.000 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:49.000 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:49.000 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:49.000 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:49.000 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3250161 00:11:49.000 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:49.000 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:49.000 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:49.000 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:49.000 00:11:49.000 real 0m10.770s 00:11:49.000 user 0m0.024s 00:11:49.000 sys 0m0.081s 00:11:49.000 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.000 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:49.000 ************************************ 00:11:49.000 END TEST filesystem_in_capsule_ext4 00:11:49.000 ************************************ 00:11:49.000 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:49.000 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:49.000 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.000 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.000 
************************************ 00:11:49.000 START TEST filesystem_in_capsule_btrfs 00:11:49.000 ************************************ 00:11:49.000 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:49.000 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:49.000 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:49.000 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:49.001 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:49.001 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:49.001 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:49.001 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:49.001 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:49.001 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:49.001 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:49.001 btrfs-progs v6.8.1 00:11:49.001 See https://btrfs.readthedocs.io for more information. 00:11:49.001 00:11:49.001 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:49.001 NOTE: several default settings have changed in version 5.15, please make sure 00:11:49.001 this does not affect your deployments: 00:11:49.001 - DUP for metadata (-m dup) 00:11:49.001 - enabled no-holes (-O no-holes) 00:11:49.001 - enabled free-space-tree (-R free-space-tree) 00:11:49.001 00:11:49.001 Label: (null) 00:11:49.001 UUID: 5506b9b3-7f74-4137-b178-5607ff17dbae 00:11:49.001 Node size: 16384 00:11:49.001 Sector size: 4096 (CPU page size: 4096) 00:11:49.001 Filesystem size: 510.00MiB 00:11:49.001 Block group profiles: 00:11:49.001 Data: single 8.00MiB 00:11:49.001 Metadata: DUP 32.00MiB 00:11:49.001 System: DUP 8.00MiB 00:11:49.001 SSD detected: yes 00:11:49.001 Zoned device: no 00:11:49.001 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:49.001 Checksum: crc32c 00:11:49.001 Number of devices: 1 00:11:49.001 Devices: 00:11:49.001 ID SIZE PATH 00:11:49.001 1 510.00MiB /dev/nvme0n1p1 00:11:49.001 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3250161 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:49.001 00:11:49.001 real 0m0.531s 00:11:49.001 user 0m0.027s 00:11:49.001 sys 0m0.113s 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:49.001 ************************************ 00:11:49.001 END TEST filesystem_in_capsule_btrfs 00:11:49.001 ************************************ 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.001 ************************************ 00:11:49.001 START TEST filesystem_in_capsule_xfs 00:11:49.001 ************************************ 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:49.001 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:49.001 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:49.001 = sectsz=512 attr=2, projid32bit=1 00:11:49.001 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:49.001 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:49.001 data = bsize=4096 blocks=130560, imaxpct=25 00:11:49.001 = sunit=0 swidth=0 blks 00:11:49.001 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:49.001 log =internal log bsize=4096 blocks=16384, version=2 00:11:49.001 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:49.001 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:49.569 Discarding blocks...Done. 
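The trace above covers the whole in-capsule filesystem flow: a 512 MiB Malloc bdev is exported over NVMe/TCP, the initiator connects and partitions the namespace, and ext4, btrfs and xfs are each created, mounted, exercised and unmounted. A condensed sketch of the equivalent shell steps follows; it assumes a target already running and reachable via scripts/rpc.py (which the trace's rpc_cmd helper effectively wraps), takes the NQN, serial, addresses and mount point from the log, and omits the --hostnqn/--hostid options the suite passes to nvme connect.

# Target side: create the backing bdev and export it over NVMe/TCP (values from the trace)
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect, partition, then run the same create/verify cycle for each filesystem
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
mkdir -p /mnt/device
for fs in ext4 btrfs xfs; do
    force=-f; [ "$fs" = ext4 ] && force=-F   # mke2fs forces with -F, btrfs/xfs with -f
    mkfs.$fs $force /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
done
nvme disconnect -n nqn.2016-06.io.spdk:cnode1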
00:11:49.569 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:49.569 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:52.104 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:52.104 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:52.104 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:52.104 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:52.104 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:52.104 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:52.104 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3250161 00:11:52.104 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:52.104 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:52.104 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:52.104 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:52.104 00:11:52.104 real 0m3.407s 00:11:52.104 user 0m0.035s 00:11:52.104 sys 0m0.062s 00:11:52.104 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:52.104 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:52.104 ************************************ 00:11:52.104 END TEST filesystem_in_capsule_xfs 00:11:52.104 ************************************ 00:11:52.104 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:52.363 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:52.363 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3250161 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3250161 ']' 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3250161 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3250161 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3250161' 00:11:52.622 killing process with pid 3250161 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 3250161 00:11:52.622 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 3250161 00:11:52.881 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:52.881 00:11:52.881 real 0m20.887s 00:11:52.881 user 1m22.314s 00:11:52.881 sys 0m1.466s 00:11:52.881 05:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:52.881 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.881 ************************************ 00:11:52.881 END TEST nvmf_filesystem_in_capsule 00:11:52.881 ************************************ 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:53.141 rmmod nvme_tcp 00:11:53.141 rmmod nvme_fabrics 00:11:53.141 rmmod nvme_keyring 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-save 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-restore 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.141 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.046 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:55.046 00:11:55.046 real 0m47.914s 00:11:55.046 user 2m39.463s 00:11:55.046 sys 0m7.041s 00:11:55.046 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:55.046 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:55.046 
************************************ 00:11:55.046 END TEST nvmf_filesystem 00:11:55.046 ************************************ 00:11:55.306 05:40:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:55.306 05:40:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:55.306 05:40:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:55.306 05:40:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:55.306 ************************************ 00:11:55.306 START TEST nvmf_target_discovery 00:11:55.306 ************************************ 00:11:55.306 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:55.306 * Looking for test storage... 00:11:55.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:55.306 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:55.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.307 --rc genhtml_branch_coverage=1 00:11:55.307 --rc genhtml_function_coverage=1 00:11:55.307 --rc genhtml_legend=1 00:11:55.307 --rc geninfo_all_blocks=1 00:11:55.307 --rc geninfo_unexecuted_blocks=1 00:11:55.307 00:11:55.307 ' 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:55.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.307 --rc genhtml_branch_coverage=1 00:11:55.307 --rc genhtml_function_coverage=1 00:11:55.307 --rc genhtml_legend=1 00:11:55.307 --rc geninfo_all_blocks=1 00:11:55.307 --rc geninfo_unexecuted_blocks=1 00:11:55.307 00:11:55.307 ' 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:55.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.307 --rc genhtml_branch_coverage=1 00:11:55.307 --rc genhtml_function_coverage=1 00:11:55.307 --rc genhtml_legend=1 00:11:55.307 --rc geninfo_all_blocks=1 00:11:55.307 --rc geninfo_unexecuted_blocks=1 00:11:55.307 00:11:55.307 ' 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:55.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.307 --rc genhtml_branch_coverage=1 00:11:55.307 --rc genhtml_function_coverage=1 00:11:55.307 --rc genhtml_legend=1 00:11:55.307 --rc geninfo_all_blocks=1 00:11:55.307 --rc geninfo_unexecuted_blocks=1 00:11:55.307 00:11:55.307 ' 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.307 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.566 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.566 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:55.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:55.567 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:00.934 05:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:00.934 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice 
== unbound ]] 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:00.934 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.934 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:00.935 Found net devices under 0000:af:00.0: cvl_0_0 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:00.935 05:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:00.935 Found net devices under 0000:af:00.1: cvl_0_1 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:00.935 05:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:00.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:00.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:12:00.935 00:12:00.935 --- 10.0.0.2 ping statistics --- 00:12:00.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.935 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:00.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:00.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:12:00.935 00:12:00.935 --- 10.0.0.1 ping statistics --- 00:12:00.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.935 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # return 0 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # nvmfpid=3257161 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # waitforlisten 3257161 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 3257161 ']' 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:00.935 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.935 [2024-12-16 05:40:34.688647] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:00.935 [2024-12-16 05:40:34.688696] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.935 [2024-12-16 05:40:34.750413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:01.195 [2024-12-16 05:40:34.791344] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.195 [2024-12-16 05:40:34.791383] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.195 [2024-12-16 05:40:34.791391] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.195 [2024-12-16 05:40:34.791399] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.195 [2024-12-16 05:40:34.791405] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
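At this point nvmf_tgt has been launched inside the target namespace and is printing its startup notices. Condensed to its essentials, the TCP init traced above (interface names, addresses and paths taken from this run; the real logic lives in nvmf_tcp_init and nvmfappstart in nvmf/common.sh) amounts to:

  # Target side gets its own network namespace; the initiator stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                  # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &                    # -m 0xF: four reactors, matching the core notices below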
00:12:01.195 [2024-12-16 05:40:34.791466] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.195 [2024-12-16 05:40:34.791543] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:01.195 [2024-12-16 05:40:34.791568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:01.195 [2024-12-16 05:40:34.791570] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.195 [2024-12-16 05:40:34.943909] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.195 Null1 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.195 05:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.195 [2024-12-16 05:40:34.992230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.195 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.195 Null2 00:12:01.195 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.195 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:01.195 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.195 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.195 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.195 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:01.195 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.195 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.195 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.195 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:01.195 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.195 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.196 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.196 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:01.196 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:01.196 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.196 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:01.196 Null3 00:12:01.196 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.196 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:01.196 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.196 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.196 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.196 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:01.196 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.196 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.461 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.461 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:01.461 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.461 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.461 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.461 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:01.461 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:01.461 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.461 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.461 Null4 00:12:01.461 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.461 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:01.461 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.461 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.461 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.461 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:01.461 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.461 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.461 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.461 05:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:01.462 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.462 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.462 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.462 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:01.462 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.462 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.462 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.462 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:01.462 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.462 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.462 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.462 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:01.462 00:12:01.462 Discovery Log Number of Records 6, Generation counter 6 00:12:01.462 =====Discovery Log Entry 0====== 00:12:01.462 trtype: tcp 00:12:01.462 adrfam: ipv4 00:12:01.462 subtype: current discovery subsystem 00:12:01.462 treq: not required 00:12:01.462 portid: 0 00:12:01.462 trsvcid: 4420 00:12:01.462 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:01.462 traddr: 10.0.0.2 00:12:01.462 eflags: explicit discovery connections, duplicate discovery information 00:12:01.462 sectype: none 00:12:01.462 =====Discovery Log Entry 1====== 00:12:01.462 trtype: tcp 00:12:01.462 adrfam: ipv4 00:12:01.462 subtype: nvme subsystem 00:12:01.462 treq: not required 00:12:01.462 portid: 0 00:12:01.462 trsvcid: 4420 00:12:01.462 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:01.462 traddr: 10.0.0.2 00:12:01.462 eflags: none 00:12:01.462 sectype: none 00:12:01.462 =====Discovery Log Entry 2====== 00:12:01.462 trtype: tcp 00:12:01.462 adrfam: ipv4 00:12:01.462 subtype: nvme subsystem 00:12:01.462 treq: not required 00:12:01.462 portid: 0 00:12:01.462 trsvcid: 4420 00:12:01.463 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:01.463 traddr: 10.0.0.2 00:12:01.463 eflags: none 00:12:01.463 sectype: none 00:12:01.463 =====Discovery Log Entry 3====== 00:12:01.463 trtype: tcp 00:12:01.463 adrfam: ipv4 00:12:01.463 subtype: nvme subsystem 00:12:01.463 treq: not required 00:12:01.463 portid: 0 00:12:01.463 trsvcid: 4420 00:12:01.463 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:01.463 traddr: 10.0.0.2 00:12:01.463 eflags: none 00:12:01.463 sectype: none 00:12:01.463 =====Discovery Log Entry 4====== 00:12:01.463 trtype: tcp 00:12:01.463 adrfam: ipv4 00:12:01.463 subtype: nvme subsystem 
00:12:01.463 treq: not required 00:12:01.463 portid: 0 00:12:01.463 trsvcid: 4420 00:12:01.463 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:01.463 traddr: 10.0.0.2 00:12:01.463 eflags: none 00:12:01.463 sectype: none 00:12:01.463 =====Discovery Log Entry 5====== 00:12:01.463 trtype: tcp 00:12:01.463 adrfam: ipv4 00:12:01.463 subtype: discovery subsystem referral 00:12:01.463 treq: not required 00:12:01.463 portid: 0 00:12:01.463 trsvcid: 4430 00:12:01.463 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:01.463 traddr: 10.0.0.2 00:12:01.463 eflags: none 00:12:01.463 sectype: none 00:12:01.463 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:01.463 Perform nvmf subsystem discovery via RPC 00:12:01.463 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:01.463 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.463 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.463 [ 00:12:01.463 { 00:12:01.463 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:01.463 "subtype": "Discovery", 00:12:01.463 "listen_addresses": [ 00:12:01.463 { 00:12:01.463 "trtype": "TCP", 00:12:01.463 "adrfam": "IPv4", 00:12:01.463 "traddr": "10.0.0.2", 00:12:01.463 "trsvcid": "4420" 00:12:01.463 } 00:12:01.463 ], 00:12:01.463 "allow_any_host": true, 00:12:01.463 "hosts": [] 00:12:01.464 }, 00:12:01.464 { 00:12:01.464 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:01.464 "subtype": "NVMe", 00:12:01.464 "listen_addresses": [ 00:12:01.464 { 00:12:01.464 "trtype": "TCP", 00:12:01.464 "adrfam": "IPv4", 00:12:01.464 "traddr": "10.0.0.2", 00:12:01.464 "trsvcid": "4420" 00:12:01.464 } 00:12:01.464 ], 00:12:01.464 "allow_any_host": true, 00:12:01.464 "hosts": [], 00:12:01.464 "serial_number": "SPDK00000000000001", 00:12:01.464 "model_number": "SPDK bdev Controller", 00:12:01.464 "max_namespaces": 32, 00:12:01.464 "min_cntlid": 1, 00:12:01.464 "max_cntlid": 65519, 00:12:01.464 "namespaces": [ 00:12:01.464 { 00:12:01.464 "nsid": 1, 00:12:01.464 "bdev_name": "Null1", 00:12:01.464 "name": "Null1", 00:12:01.464 "nguid": "7985831E3E9E4574B3E0A8ED2C967748", 00:12:01.464 "uuid": "7985831e-3e9e-4574-b3e0-a8ed2c967748" 00:12:01.464 } 00:12:01.464 ] 00:12:01.464 }, 00:12:01.464 { 00:12:01.464 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:01.464 "subtype": "NVMe", 00:12:01.464 "listen_addresses": [ 00:12:01.464 { 00:12:01.464 "trtype": "TCP", 00:12:01.464 "adrfam": "IPv4", 00:12:01.464 "traddr": "10.0.0.2", 00:12:01.464 "trsvcid": "4420" 00:12:01.464 } 00:12:01.464 ], 00:12:01.464 "allow_any_host": true, 00:12:01.464 "hosts": [], 00:12:01.464 "serial_number": "SPDK00000000000002", 00:12:01.467 "model_number": "SPDK bdev Controller", 00:12:01.467 "max_namespaces": 32, 00:12:01.468 "min_cntlid": 1, 00:12:01.468 "max_cntlid": 65519, 00:12:01.468 "namespaces": [ 00:12:01.468 { 00:12:01.468 "nsid": 1, 00:12:01.468 "bdev_name": "Null2", 00:12:01.468 "name": "Null2", 00:12:01.468 "nguid": "A060177FCF5E48BD99600DF103950621", 00:12:01.468 "uuid": "a060177f-cf5e-48bd-9960-0df103950621" 00:12:01.468 } 00:12:01.468 ] 00:12:01.468 }, 00:12:01.468 { 00:12:01.468 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:01.468 "subtype": "NVMe", 00:12:01.468 "listen_addresses": [ 00:12:01.468 { 00:12:01.468 "trtype": "TCP", 00:12:01.468 "adrfam": "IPv4", 00:12:01.468 "traddr": "10.0.0.2", 
00:12:01.468 "trsvcid": "4420" 00:12:01.468 } 00:12:01.468 ], 00:12:01.468 "allow_any_host": true, 00:12:01.468 "hosts": [], 00:12:01.468 "serial_number": "SPDK00000000000003", 00:12:01.468 "model_number": "SPDK bdev Controller", 00:12:01.468 "max_namespaces": 32, 00:12:01.468 "min_cntlid": 1, 00:12:01.468 "max_cntlid": 65519, 00:12:01.468 "namespaces": [ 00:12:01.468 { 00:12:01.468 "nsid": 1, 00:12:01.468 "bdev_name": "Null3", 00:12:01.468 "name": "Null3", 00:12:01.468 "nguid": "73490A8C365541B4858F1231C801E824", 00:12:01.468 "uuid": "73490a8c-3655-41b4-858f-1231c801e824" 00:12:01.468 } 00:12:01.468 ] 00:12:01.468 }, 00:12:01.468 { 00:12:01.468 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:01.468 "subtype": "NVMe", 00:12:01.468 "listen_addresses": [ 00:12:01.468 { 00:12:01.468 "trtype": "TCP", 00:12:01.468 "adrfam": "IPv4", 00:12:01.468 "traddr": "10.0.0.2", 00:12:01.468 "trsvcid": "4420" 00:12:01.468 } 00:12:01.468 ], 00:12:01.468 "allow_any_host": true, 00:12:01.468 "hosts": [], 00:12:01.468 "serial_number": "SPDK00000000000004", 00:12:01.468 "model_number": "SPDK bdev Controller", 00:12:01.468 "max_namespaces": 32, 00:12:01.468 "min_cntlid": 1, 00:12:01.468 "max_cntlid": 65519, 00:12:01.468 "namespaces": [ 00:12:01.468 { 00:12:01.468 "nsid": 1, 00:12:01.468 "bdev_name": "Null4", 00:12:01.468 "name": "Null4", 00:12:01.468 "nguid": "2CF57BAD5FBB4F999AA8A4749C2A1883", 00:12:01.468 "uuid": "2cf57bad-5fbb-4f99-9aa8-a4749c2a1883" 00:12:01.468 } 00:12:01.468 ] 00:12:01.468 } 00:12:01.469 ] 00:12:01.469 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.469 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:01.469 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:01.469 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.469 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.469 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.469 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.469 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:01.469 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.469 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.469 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.469 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:01.469 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:01.469 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.469 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.469 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.469 05:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:01.469 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.470 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.470 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.470 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:01.470 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:01.470 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.470 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.470 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.470 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:01.470 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:01.728 05:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:01.728 rmmod nvme_tcp 00:12:01.728 rmmod nvme_fabrics 00:12:01.728 rmmod nvme_keyring 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@513 -- # '[' -n 3257161 ']' 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # killprocess 3257161 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 3257161 ']' 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 3257161 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3257161 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3257161' 00:12:01.728 killing process with pid 3257161 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 3257161 00:12:01.728 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 3257161 00:12:01.987 05:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:01.987 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:01.987 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:01.987 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:01.987 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-save 00:12:01.987 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:01.987 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:12:01.987 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:01.987 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:01.987 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.987 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.987 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:04.523 00:12:04.523 real 0m8.793s 00:12:04.523 user 0m5.300s 00:12:04.523 sys 0m4.404s 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.523 ************************************ 00:12:04.523 END TEST nvmf_target_discovery 00:12:04.523 ************************************ 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:04.523 ************************************ 00:12:04.523 START TEST nvmf_referrals 00:12:04.523 ************************************ 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:04.523 * Looking for test storage... 
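Before the referrals output continues, the RPC flow nvmf_target_discovery just exercised is worth condensing. rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, so the equivalent standalone sequence (NQNs, serial numbers and addresses copied from this run) is roughly:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2 3 4; do
      rpc.py bdev_null_create Null$i 102400 512                # null bdev: 102400 MiB, 512-byte blocks
      rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  nvme discover -t tcp -a 10.0.0.2 -s 4420   # 6 records: the discovery subsystem, cnode1-4, and the 4430 referral
  rpc.py nvmf_get_subsystems                 # the same picture as JSON

Teardown then walks the same list in reverse: delete each subsystem and its null bdev, remove the referral, and nvmftestfini unloads nvme-tcp/nvme-fabrics and kills the target process.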
00:12:04.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:04.523 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:04.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.523 --rc genhtml_branch_coverage=1 00:12:04.523 --rc genhtml_function_coverage=1 00:12:04.523 --rc genhtml_legend=1 00:12:04.523 --rc geninfo_all_blocks=1 00:12:04.523 --rc geninfo_unexecuted_blocks=1 00:12:04.523 00:12:04.523 ' 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:04.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.523 --rc genhtml_branch_coverage=1 00:12:04.523 --rc genhtml_function_coverage=1 00:12:04.523 --rc genhtml_legend=1 00:12:04.523 --rc geninfo_all_blocks=1 00:12:04.523 --rc geninfo_unexecuted_blocks=1 00:12:04.523 00:12:04.523 ' 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:04.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.523 --rc genhtml_branch_coverage=1 00:12:04.523 --rc genhtml_function_coverage=1 00:12:04.523 --rc genhtml_legend=1 00:12:04.523 --rc geninfo_all_blocks=1 00:12:04.523 --rc geninfo_unexecuted_blocks=1 00:12:04.523 00:12:04.523 ' 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:04.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.523 --rc genhtml_branch_coverage=1 00:12:04.523 --rc genhtml_function_coverage=1 00:12:04.523 --rc genhtml_legend=1 00:12:04.523 --rc geninfo_all_blocks=1 00:12:04.523 --rc geninfo_unexecuted_blocks=1 00:12:04.523 00:12:04.523 ' 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.523 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:04.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
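The loopback addresses defined here and just below (127.0.0.2 through 127.0.0.4, referral port 4430) are the endpoints this test will register as discovery referrals. Using only RPCs already seen in the discovery test above, registering and observing a referral looks roughly like this (a sketch; the exact assertions live in referrals.sh, and the discovery listener address is assumed to follow the same 10.0.0.2:4420 convention):

  rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
  rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
  # Each referral appears in the discovery log as a "discovery subsystem referral"
  # record carrying the referred traddr/trsvcid, as in log entry 5 earlier.
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430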
00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:04.524 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:09.801 05:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:09.801 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:09.801 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:09.801 05:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:09.801 Found net devices under 0000:af:00.0: cvl_0_0 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:09.801 Found net devices under 0000:af:00.1: cvl_0_1 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # is_hw=yes 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:09.801 05:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:09.801 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:09.802 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:09.802 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.802 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:09.802 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:09.802 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:09.802 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:10.060 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:10.060 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:10.060 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:10.060 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:10.060 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:10.060 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:10.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:10.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms 00:12:10.061 00:12:10.061 --- 10.0.0.2 ping statistics --- 00:12:10.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.061 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:10.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:10.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:12:10.061 00:12:10.061 --- 10.0.0.1 ping statistics --- 00:12:10.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.061 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # return 0 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # nvmfpid=3260835 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # waitforlisten 3260835 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 3260835 ']' 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
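The nvmf_tcp_init block traced above builds the standard two-port TCP test topology: one ice port stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1), the other is moved into a private namespace for the target (cvl_0_0, 10.0.0.2), TCP port 4420 is opened in iptables, and a ping in each direction confirms the link before the target starts. A minimal stand-alone sketch of those steps, assuming the interface names from this run and root privileges:

    ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator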
00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:10.061 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.320 [2024-12-16 05:40:43.947866] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:10.320 [2024-12-16 05:40:43.947910] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.320 [2024-12-16 05:40:44.006421] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.320 [2024-12-16 05:40:44.046941] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.320 [2024-12-16 05:40:44.046984] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.320 [2024-12-16 05:40:44.046993] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.320 [2024-12-16 05:40:44.046999] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.320 [2024-12-16 05:40:44.047005] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.320 [2024-12-16 05:40:44.047053] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.320 [2024-12-16 05:40:44.047132] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.320 [2024-12-16 05:40:44.047155] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.320 [2024-12-16 05:40:44.047156] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.320 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:10.320 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:10.320 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:10.320 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:10.320 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.579 [2024-12-16 05:40:44.196726] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
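With networking in place, nvmfappstart launches nvmf_tgt inside the target namespace and referrals.sh configures it over the RPC socket: a TCP transport is created and a discovery listener is opened on 10.0.0.2:8009. A rough equivalent of those rpc_cmd calls using scripts/rpc.py directly (a sketch only; arguments are copied from the trace above):

    # start the target in the namespace created above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # once /var/tmp/spdk.sock is accepting connections, create the transport and discovery listener
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009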
00:12:10.579 [2024-12-16 05:40:44.212963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.579 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:10.580 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.580 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.580 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:10.580 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:10.580 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:10.580 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:10.580 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:10.580 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:10.580 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:10.580 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:10.838 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:10.838 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:10.838 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:10.838 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.838 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.838 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.838 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:10.839 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.839 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.839 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.839 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:10.839 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.839 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.839 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.839 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:10.839 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:10.839 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.839 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.839 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.839 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:10.839 05:40:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:10.839 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:10.839 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:10.839 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:10.839 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:10.839 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:11.098 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:11.364 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:11.364 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:11.364 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:11.364 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:11.364 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:11.364 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:11.364 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:11.364 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:11.364 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:11.364 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:11.364 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:11.364 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:11.364 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:11.623 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:11.623 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:11.623 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.623 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.623 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.623 05:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:11.623 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:11.623 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:11.623 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:11.623 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.623 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:11.623 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.623 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.623 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:11.623 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:11.623 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:11.623 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:11.623 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:11.623 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:11.623 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:11.624 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:11.882 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:11.882 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:11.882 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:11.882 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:11.882 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:11.882 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:11.882 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:12.141 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:12.141 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:12.141 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:12.141 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:12.141 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:12.141 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:12.141 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:12.141 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:12.141 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.141 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.400 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.400 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:12.400 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:12.400 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.400 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.400 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.400 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:12.400 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:12.400 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:12.400 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:12.400 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:12.400 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:12.400 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
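The rest of referrals.sh traced above exercises the discovery-referral RPCs end to end: three referrals (127.0.0.2/.3/.4 on port 4430) are added and read back, the same list is cross-checked against the discovery log page that nvme discover returns from the 8009 listener, the referrals are removed, and the variants that point at nqn.2016-06.io.spdk:cnode1 and at the discovery NQN itself are exercised the same way. Stripped of the harness helpers (get_referral_ips, get_discovery_entries), the core add/verify/remove loop looks roughly like this, with the host NQN and host ID taken from this run:

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    # the RPC view and the on-the-wire discovery log page should report the same addresses
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done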
00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:12.660 rmmod nvme_tcp 00:12:12.660 rmmod nvme_fabrics 00:12:12.660 rmmod nvme_keyring 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@513 -- # '[' -n 3260835 ']' 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # killprocess 3260835 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 3260835 ']' 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 3260835 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3260835 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3260835' 00:12:12.660 killing process with pid 3260835 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 3260835 00:12:12.660 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 3260835 00:12:12.920 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:12.920 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:12.920 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:12.920 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:12.920 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-restore 00:12:12.920 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-save 00:12:12.920 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:12.920 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:12.920 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:12.920 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.920 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.920 05:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.825 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:14.825 00:12:14.825 real 0m10.828s 00:12:14.825 user 0m12.574s 00:12:14.825 sys 0m5.122s 00:12:14.825 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:14.825 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.825 ************************************ 00:12:14.825 END TEST nvmf_referrals 00:12:14.825 ************************************ 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:15.085 ************************************ 00:12:15.085 START TEST nvmf_connect_disconnect 00:12:15.085 ************************************ 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:15.085 * Looking for test storage... 00:12:15.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.085 05:40:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:15.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.085 --rc genhtml_branch_coverage=1 00:12:15.085 --rc genhtml_function_coverage=1 00:12:15.085 --rc genhtml_legend=1 00:12:15.085 --rc geninfo_all_blocks=1 00:12:15.085 --rc geninfo_unexecuted_blocks=1 00:12:15.085 00:12:15.085 ' 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:15.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.085 --rc genhtml_branch_coverage=1 00:12:15.085 --rc genhtml_function_coverage=1 00:12:15.085 --rc genhtml_legend=1 00:12:15.085 --rc geninfo_all_blocks=1 00:12:15.085 --rc geninfo_unexecuted_blocks=1 00:12:15.085 00:12:15.085 ' 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:15.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.085 --rc genhtml_branch_coverage=1 00:12:15.085 --rc genhtml_function_coverage=1 00:12:15.085 --rc genhtml_legend=1 00:12:15.085 --rc geninfo_all_blocks=1 00:12:15.085 --rc geninfo_unexecuted_blocks=1 00:12:15.085 00:12:15.085 ' 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:15.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.085 --rc genhtml_branch_coverage=1 00:12:15.085 --rc genhtml_function_coverage=1 00:12:15.085 --rc genhtml_legend=1 00:12:15.085 --rc geninfo_all_blocks=1 00:12:15.085 --rc geninfo_unexecuted_blocks=1 00:12:15.085 00:12:15.085 ' 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.085 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.086 05:40:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:15.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:15.086 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:20.356 
05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:20.356 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:20.356 05:40:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:20.356 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:20.356 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:20.357 Found net devices under 0000:af:00.0: cvl_0_0 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:20.357 05:40:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:20.357 Found net devices under 0000:af:00.1: cvl_0_1 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.357 05:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.357 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.357 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
00:12:20.357 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.357 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:20.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:12:20.617 00:12:20.617 --- 10.0.0.2 ping statistics --- 00:12:20.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.617 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:12:20.617 00:12:20.617 --- 10.0.0.1 ping statistics --- 00:12:20.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.617 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # return 0 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # nvmfpid=3264832 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # waitforlisten 3264832 00:12:20.617 05:40:54 
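The nvmf_tcp_init trace above isolates the two E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the host as the initiator side (10.0.0.1), with an iptables rule opening TCP port 4420 and a ping in each direction as a sanity check. A stand-alone sketch of that plumbing, consolidated from the commands in the trace (the real logic lives in test/nvmf/common.sh):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address on the host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then launched under 'ip netns exec cvl_0_0_ns_spdk', which is why the EAL and reactor messages that follow come from inside the namespace.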
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 3264832 ']' 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:20.617 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:20.617 [2024-12-16 05:40:54.320966] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:20.617 [2024-12-16 05:40:54.321012] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.617 [2024-12-16 05:40:54.380461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.617 [2024-12-16 05:40:54.419901] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.617 [2024-12-16 05:40:54.419941] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.617 [2024-12-16 05:40:54.419949] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.617 [2024-12-16 05:40:54.419955] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.617 [2024-12-16 05:40:54.419962] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:20.617 [2024-12-16 05:40:54.420014] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.617 [2024-12-16 05:40:54.420094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.617 [2024-12-16 05:40:54.420160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.617 [2024-12-16 05:40:54.420160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:20.876 [2024-12-16 05:40:54.566308] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:20.876 05:40:54 
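With the target application polling inside the namespace, the test configures it over the RPC socket: a TCP transport, a 64 MiB malloc bdev, a subsystem, and the bdev attached as its namespace; the listener on 10.0.0.2:4420 is added immediately after in the trace. rpc_cmd in the trace is effectively SPDK's scripts/rpc.py talking to the app's RPC socket, so the equivalent sequence is roughly the following (rpc.py path and /var/tmp/spdk.sock socket are assumptions):

  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"   # run from the SPDK repo root (assumption)
  $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
  $RPC bdev_malloc_create 64 512                 # 64 MiB bdev with 512-byte blocks -> Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420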
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:20.876 [2024-12-16 05:40:54.617850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:20.876 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:23.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.590 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:15:26.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.247 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.771 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:12.771 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:12.771 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:12.771 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:12.771 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:12.771 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:12.771 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:12.771 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:12.771 rmmod nvme_tcp 00:16:12.771 rmmod nvme_fabrics 00:16:12.771 rmmod nvme_keyring 00:16:12.771 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:12.771 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:12.771 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:12.771 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@513 -- # '[' -n 3264832 ']' 00:16:12.771 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # killprocess 3264832 00:16:12.771 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3264832 ']' 00:16:12.771 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 3264832 00:16:12.771 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
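Each "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line above is one pass of the connect/disconnect loop: the trace sets num_iterations=100 and NVME_CONNECT='nvme connect -i 8' before entering it. A simplified reconstruction of the loop (the real script, test/nvmf/target/connect_disconnect.sh, does more checking per iteration):

  NQN=nqn.2016-06.io.spdk:cnode1
  for i in $(seq 1 100); do
      nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n "$NQN"
      # the real test verifies the connection is usable here before tearing it down
      nvme disconnect -n "$NQN"    # prints the "disconnected 1 controller(s)" line
  done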
00:16:12.771 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:12.771 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3264832 00:16:13.031 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:13.031 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:13.031 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3264832' 00:16:13.031 killing process with pid 3264832 00:16:13.031 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 3264832 00:16:13.031 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 3264832 00:16:13.031 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:13.031 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:13.031 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:13.031 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:13.031 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:16:13.031 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:13.031 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:16:13.031 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:13.031 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:13.031 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.031 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:13.031 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.567 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:15.567 00:16:15.567 real 4m0.193s 00:16:15.567 user 15m19.257s 00:16:15.567 sys 0m24.632s 00:16:15.567 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:15.567 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:15.567 ************************************ 00:16:15.567 END TEST nvmf_connect_disconnect 00:16:15.567 ************************************ 00:16:15.567 05:44:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:15.567 05:44:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:15.567 05:44:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:15.567 05:44:48 
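nvmftestfini, traced above just before the next test begins, unwinds the whole setup: the kernel NVMe/TCP modules are unloaded, the nvmf_tgt process is killed and reaped, the SPDK-tagged iptables rule is removed by filtering the comment out of iptables-save, and the namespace and leftover addresses are cleaned up. Roughly, with the namespace removal spelled out (the exact body of remove_spdk_ns is an assumption):

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                       # stop the nvmf_tgt reactors
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop rules tagged with the SPDK comment
  ip netns delete cvl_0_0_ns_spdk                          # what remove_spdk_ns amounts to here (assumption)
  ip -4 addr flush cvl_0_1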
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:15.567 ************************************ 00:16:15.567 START TEST nvmf_multitarget 00:16:15.567 ************************************ 00:16:15.567 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:15.567 * Looking for test storage... 00:16:15.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:15.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.568 --rc genhtml_branch_coverage=1 00:16:15.568 --rc genhtml_function_coverage=1 00:16:15.568 --rc genhtml_legend=1 00:16:15.568 --rc geninfo_all_blocks=1 00:16:15.568 --rc geninfo_unexecuted_blocks=1 00:16:15.568 00:16:15.568 ' 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:15.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.568 --rc genhtml_branch_coverage=1 00:16:15.568 --rc genhtml_function_coverage=1 00:16:15.568 --rc genhtml_legend=1 00:16:15.568 --rc geninfo_all_blocks=1 00:16:15.568 --rc geninfo_unexecuted_blocks=1 00:16:15.568 00:16:15.568 ' 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:15.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.568 --rc genhtml_branch_coverage=1 00:16:15.568 --rc genhtml_function_coverage=1 00:16:15.568 --rc genhtml_legend=1 00:16:15.568 --rc geninfo_all_blocks=1 00:16:15.568 --rc geninfo_unexecuted_blocks=1 00:16:15.568 00:16:15.568 ' 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:15.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.568 --rc genhtml_branch_coverage=1 00:16:15.568 --rc genhtml_function_coverage=1 00:16:15.568 --rc genhtml_legend=1 00:16:15.568 --rc geninfo_all_blocks=1 00:16:15.568 --rc geninfo_unexecuted_blocks=1 00:16:15.568 00:16:15.568 ' 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:15.568 05:44:49 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:15.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:15.568 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:15.568 05:44:49 
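The "[: : integer expression expected" message above is harmless: line 33 of nvmf/common.sh evaluates '[' '' -eq 1 ']' (visible in the trace just before the message), the variable behind it is empty in this environment, and [ cannot parse the empty string as an integer, so the test just fails with a non-zero status and the script carries on. A minimal reproduction and the usual guard (the variable name here is hypothetical):

  flag=""                  # stand-in for the empty build flag
  [ "$flag" -eq 1 ]        # -> [: : integer expression expected, non-zero exit status
  [ "${flag:-0}" -eq 1 ]   # defaulting the empty value to 0 keeps the test numeric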
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:15.569 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:15.569 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.569 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:15.569 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:15.569 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:15.569 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.569 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:15.569 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.569 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:15.569 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:15.569 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:15.569 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:20.843 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:20.843 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 
00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:20.843 Found net devices under 0000:af:00.0: cvl_0_0 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:20.843 Found net devices under 0000:af:00.1: cvl_0_1 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # is_hw=yes 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:20.843 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:20.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:16:20.844 00:16:20.844 --- 10.0.0.2 ping statistics --- 00:16:20.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.844 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:20.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:20.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:16:20.844 00:16:20.844 --- 10.0.0.1 ping statistics --- 00:16:20.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.844 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # return 0 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # nvmfpid=3307872 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # waitforlisten 3307872 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 3307872 ']' 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:20.844 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:21.103 [2024-12-16 05:44:54.732920] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:21.103 [2024-12-16 05:44:54.732964] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.103 [2024-12-16 05:44:54.793290] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:21.103 [2024-12-16 05:44:54.832610] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.103 [2024-12-16 05:44:54.832651] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.103 [2024-12-16 05:44:54.832659] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.103 [2024-12-16 05:44:54.832665] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.103 [2024-12-16 05:44:54.832670] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.103 [2024-12-16 05:44:54.832717] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.103 [2024-12-16 05:44:54.832798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.103 [2024-12-16 05:44:54.832901] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:21.103 [2024-12-16 05:44:54.832903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.103 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:21.103 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:21.103 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:21.103 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:21.103 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:21.364 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.364 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:21.364 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:21.364 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:21.364 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:21.364 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:21.364 "nvmf_tgt_1" 00:16:21.364 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:21.623 "nvmf_tgt_2" 00:16:21.623 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
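The multitarget checks above reduce to counting targets before and after each RPC. Condensed (paths as used in this run; jq assumed to be installed):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]      # only the default target exists at the start
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32           # add two extra targets, 32 subsystems each
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]      # default target plus the two new ones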
00:16:21.623 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:21.623 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:21.623 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:21.881 true 00:16:21.881 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:21.881 true 00:16:21.881 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:21.881 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:21.881 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:21.881 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:21.881 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:21.881 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:21.881 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:22.140 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:22.140 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:22.140 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:22.140 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:22.140 rmmod nvme_tcp 00:16:22.140 rmmod nvme_fabrics 00:16:22.140 rmmod nvme_keyring 00:16:22.140 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:22.140 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:22.140 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:22.140 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@513 -- # '[' -n 3307872 ']' 00:16:22.140 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # killprocess 3307872 00:16:22.140 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 3307872 ']' 00:16:22.140 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 3307872 00:16:22.140 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:22.140 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:22.140 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3307872 00:16:22.140 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:22.140 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:22.140 05:44:55 
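The tail of the test is symmetric: delete the extra targets, confirm only the default one remains, then tear the environment down. Roughly (the kill/wait step stands in for the killprocess helper; $nvmfpid is the pid captured when nvmf_tgt was started):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # back to the default target only
  modprobe -v -r nvme-tcp                            # unload the initiator-side modules
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                 # stop the nvmf_tgt process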
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3307872' 00:16:22.140 killing process with pid 3307872 00:16:22.140 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 3307872 00:16:22.140 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 3307872 00:16:22.400 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:22.400 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:22.400 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:22.400 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:22.400 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-save 00:16:22.400 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:22.400 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-restore 00:16:22.400 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:22.400 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:22.400 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.400 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:22.400 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.305 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:24.305 00:16:24.305 real 0m9.113s 00:16:24.305 user 0m7.025s 00:16:24.305 sys 0m4.523s 00:16:24.305 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:24.305 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:24.305 ************************************ 00:16:24.305 END TEST nvmf_multitarget 00:16:24.305 ************************************ 00:16:24.305 05:44:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:24.305 05:44:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:24.305 05:44:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:24.305 05:44:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:24.564 ************************************ 00:16:24.564 START TEST nvmf_rpc 00:16:24.564 ************************************ 00:16:24.564 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:24.564 * Looking for test storage... 
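The iptr cleanup above removes only the firewall rules the test tagged with the SPDK_NVMF comment, by round-tripping the ruleset through iptables-save/iptables-restore, and then drops the namespace. A sketch (the exact steps of the _remove_spdk_ns helper are not visible in the trace):

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep every rule except the tagged ones
  ip netns delete cvl_0_0_ns_spdk                         # physical port cvl_0_0 returns to the default namespace
  ip -4 addr flush cvl_0_1                                # clear the initiator-side address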
00:16:24.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:24.564 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:24.564 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:16:24.564 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:24.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.565 --rc genhtml_branch_coverage=1 00:16:24.565 --rc genhtml_function_coverage=1 00:16:24.565 --rc genhtml_legend=1 00:16:24.565 --rc geninfo_all_blocks=1 00:16:24.565 --rc geninfo_unexecuted_blocks=1 00:16:24.565 00:16:24.565 ' 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:24.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.565 --rc genhtml_branch_coverage=1 00:16:24.565 --rc genhtml_function_coverage=1 00:16:24.565 --rc genhtml_legend=1 00:16:24.565 --rc geninfo_all_blocks=1 00:16:24.565 --rc geninfo_unexecuted_blocks=1 00:16:24.565 00:16:24.565 ' 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:24.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.565 --rc genhtml_branch_coverage=1 00:16:24.565 --rc genhtml_function_coverage=1 00:16:24.565 --rc genhtml_legend=1 00:16:24.565 --rc geninfo_all_blocks=1 00:16:24.565 --rc geninfo_unexecuted_blocks=1 00:16:24.565 00:16:24.565 ' 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:24.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.565 --rc genhtml_branch_coverage=1 00:16:24.565 --rc genhtml_function_coverage=1 00:16:24.565 --rc genhtml_legend=1 00:16:24.565 --rc geninfo_all_blocks=1 00:16:24.565 --rc geninfo_unexecuted_blocks=1 00:16:24.565 00:16:24.565 ' 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
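The lt/cmp_versions trace above compares dotted version strings field by field to decide whether the installed lcov is older than 2. A compact standalone equivalent (a sketch, not the exact scripts/common.sh implementation):

  # return success when $1 is strictly older than $2, e.g. version_lt 1.15 2
  version_lt() {
      local IFS=.-:                        # split on dots, dashes and colons, as the trace does
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}  # missing fields count as 0
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1                             # equal
  }
  version_lt 1.15 2 && echo "lcov 1.15 is older than 2"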
00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
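The host identity used for every nvme connect later in this run is derived once, as traced above, from nvme gen-hostnqn. A minimal sketch of that derivation (the UUID-suffix extraction is an assumption about how common.sh obtains the host ID; requires nvme-cli):

  NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # keep only the UUID part as the host ID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  echo "connecting as ${NVME_HOST[*]}"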
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:24.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:24.565 05:44:58 
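The "[: : integer expression expected" message above is a benign shell error: build_nvmf_app_args feeds an empty variable to an arithmetic [ ... -eq 1 ] test. The variable's name is not visible in the trace, so the reproduction and fix below use a hypothetical SOME_FLAG; defaulting it keeps the test numeric:

  SOME_FLAG=""                                  # unset/empty, as in this run
  [ "$SOME_FLAG" -eq 1 ] 2> /dev/null || true   # -> "[: : integer expression expected"
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then          # substitute 0 when the flag is missing or empty
      echo "flag enabled"
  fi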
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:24.565 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:24.566 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:24.566 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:29.836 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:29.836 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:29.836 
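The discovery pass traced here maps each supported NIC's PCI address to its kernel net device through sysfs and keeps only interfaces that are up. A standalone sketch of that lookup for one device from this run (the real common.sh additionally filters by vendor/device ID as shown above):

  pci=0000:af:00.0
  net_devs=()
  for dev in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$dev" ] || continue                 # skip PCI functions with no net children
      name=${dev##*/}
      if [ "$(cat /sys/class/net/$name/operstate 2> /dev/null)" = up ]; then
          net_devs+=("$name")
          echo "Found net devices under $pci: $name"
      fi
  done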
05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:29.836 Found net devices under 0000:af:00.0: cvl_0_0 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:29.836 Found net devices under 0000:af:00.1: cvl_0_1 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # is_hw=yes 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:29.836 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.095 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.095 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.095 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:30.095 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:30.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:16:30.095 00:16:30.095 --- 10.0.0.2 ping statistics --- 00:16:30.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.095 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:16:30.095 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:30.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:16:30.095 00:16:30.095 --- 10.0.0.1 ping statistics --- 00:16:30.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.095 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:16:30.095 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.095 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # return 0 00:16:30.095 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:30.095 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.096 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:30.096 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:30.096 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.096 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:30.096 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:30.096 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:30.096 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:30.096 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:30.096 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.096 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # nvmfpid=3311563 00:16:30.096 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # waitforlisten 3311563 00:16:30.096 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:30.096 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 3311563 ']' 00:16:30.096 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.096 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:30.096 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.096 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:30.096 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.096 [2024-12-16 05:45:03.852438] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:30.096 [2024-12-16 05:45:03.852481] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.096 [2024-12-16 05:45:03.907170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:30.096 [2024-12-16 05:45:03.946364] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.096 [2024-12-16 05:45:03.946403] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.096 [2024-12-16 05:45:03.946411] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.096 [2024-12-16 05:45:03.946416] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.096 [2024-12-16 05:45:03.946421] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:30.096 [2024-12-16 05:45:03.946467] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.096 [2024-12-16 05:45:03.946557] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:30.096 [2024-12-16 05:45:03.946648] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:30.096 [2024-12-16 05:45:03.946649] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.354 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:30.354 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:30.354 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:30.354 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:30.354 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.354 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.354 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:30.354 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.354 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.354 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.354 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:30.354 "tick_rate": 2100000000, 00:16:30.354 "poll_groups": [ 00:16:30.354 { 00:16:30.354 "name": "nvmf_tgt_poll_group_000", 00:16:30.354 "admin_qpairs": 0, 00:16:30.354 "io_qpairs": 0, 00:16:30.354 "current_admin_qpairs": 0, 00:16:30.354 "current_io_qpairs": 0, 00:16:30.354 "pending_bdev_io": 0, 00:16:30.354 "completed_nvme_io": 0, 00:16:30.354 "transports": [] 00:16:30.354 }, 00:16:30.354 { 00:16:30.354 "name": "nvmf_tgt_poll_group_001", 00:16:30.354 "admin_qpairs": 0, 00:16:30.354 "io_qpairs": 0, 00:16:30.354 "current_admin_qpairs": 0, 00:16:30.354 "current_io_qpairs": 0, 00:16:30.354 "pending_bdev_io": 0, 00:16:30.354 "completed_nvme_io": 0, 00:16:30.354 "transports": [] 00:16:30.354 }, 00:16:30.354 { 00:16:30.354 "name": "nvmf_tgt_poll_group_002", 00:16:30.354 "admin_qpairs": 0, 00:16:30.354 "io_qpairs": 0, 00:16:30.354 
"current_admin_qpairs": 0, 00:16:30.354 "current_io_qpairs": 0, 00:16:30.354 "pending_bdev_io": 0, 00:16:30.354 "completed_nvme_io": 0, 00:16:30.354 "transports": [] 00:16:30.354 }, 00:16:30.354 { 00:16:30.355 "name": "nvmf_tgt_poll_group_003", 00:16:30.355 "admin_qpairs": 0, 00:16:30.355 "io_qpairs": 0, 00:16:30.355 "current_admin_qpairs": 0, 00:16:30.355 "current_io_qpairs": 0, 00:16:30.355 "pending_bdev_io": 0, 00:16:30.355 "completed_nvme_io": 0, 00:16:30.355 "transports": [] 00:16:30.355 } 00:16:30.355 ] 00:16:30.355 }' 00:16:30.355 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:30.355 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:30.355 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:30.355 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:30.355 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:30.355 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:30.355 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:30.355 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:30.355 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.355 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.355 [2024-12-16 05:45:04.204433] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:30.614 "tick_rate": 2100000000, 00:16:30.614 "poll_groups": [ 00:16:30.614 { 00:16:30.614 "name": "nvmf_tgt_poll_group_000", 00:16:30.614 "admin_qpairs": 0, 00:16:30.614 "io_qpairs": 0, 00:16:30.614 "current_admin_qpairs": 0, 00:16:30.614 "current_io_qpairs": 0, 00:16:30.614 "pending_bdev_io": 0, 00:16:30.614 "completed_nvme_io": 0, 00:16:30.614 "transports": [ 00:16:30.614 { 00:16:30.614 "trtype": "TCP" 00:16:30.614 } 00:16:30.614 ] 00:16:30.614 }, 00:16:30.614 { 00:16:30.614 "name": "nvmf_tgt_poll_group_001", 00:16:30.614 "admin_qpairs": 0, 00:16:30.614 "io_qpairs": 0, 00:16:30.614 "current_admin_qpairs": 0, 00:16:30.614 "current_io_qpairs": 0, 00:16:30.614 "pending_bdev_io": 0, 00:16:30.614 "completed_nvme_io": 0, 00:16:30.614 "transports": [ 00:16:30.614 { 00:16:30.614 "trtype": "TCP" 00:16:30.614 } 00:16:30.614 ] 00:16:30.614 }, 00:16:30.614 { 00:16:30.614 "name": "nvmf_tgt_poll_group_002", 00:16:30.614 "admin_qpairs": 0, 00:16:30.614 "io_qpairs": 0, 00:16:30.614 "current_admin_qpairs": 0, 00:16:30.614 "current_io_qpairs": 0, 00:16:30.614 "pending_bdev_io": 0, 00:16:30.614 "completed_nvme_io": 0, 00:16:30.614 "transports": [ 00:16:30.614 { 00:16:30.614 "trtype": "TCP" 
00:16:30.614 } 00:16:30.614 ] 00:16:30.614 }, 00:16:30.614 { 00:16:30.614 "name": "nvmf_tgt_poll_group_003", 00:16:30.614 "admin_qpairs": 0, 00:16:30.614 "io_qpairs": 0, 00:16:30.614 "current_admin_qpairs": 0, 00:16:30.614 "current_io_qpairs": 0, 00:16:30.614 "pending_bdev_io": 0, 00:16:30.614 "completed_nvme_io": 0, 00:16:30.614 "transports": [ 00:16:30.614 { 00:16:30.614 "trtype": "TCP" 00:16:30.614 } 00:16:30.614 ] 00:16:30.614 } 00:16:30.614 ] 00:16:30.614 }' 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.614 Malloc1 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:30.614 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
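The jcount/jsum helpers applied to the stats JSON above are thin jq wrappers: jcount counts the values a filter yields, jsum adds them up. Reconstructed from the trace (the real rpc.sh feeds them the captured $stats string; reading stdin here is an assumption):

  jcount() { jq "$1" | wc -l; }                        # how many values the filter produces
  jsum()   { jq "$1" | awk '{s+=$1} END {print s}'; }  # numeric sum of the filter's output
  # against the stats captured above: 4 poll groups, 0 queue pairs in total
  echo "$stats" | jcount '.poll_groups[].name'         # -> 4
  echo "$stats" | jsum  '.poll_groups[].io_qpairs'     # -> 0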
common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.615 [2024-12-16 05:45:04.376024] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:30.615 [2024-12-16 05:45:04.410730] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:30.615 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:30.615 could not add new controller: failed to write to nvme-fabrics device 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:30.615 05:45:04 
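Spelled out as plain RPC calls, the transport and subsystem setup performed in this stretch of the trace is short (rpc_cmd is a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; the option comments are informal):

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192                       # TCP transport with the options used here
  $RPC bdev_malloc_create 64 512 -b Malloc1                          # 64 MB RAM-backed bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # require an explicit host list
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # with any-host disabled, a connect from an unlisted host NQN fails exactly as logged above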
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.615 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:31.992 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:31.992 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:31.992 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:31.992 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:31.992 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:33.896 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:33.896 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:33.896 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:33.896 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:33.896 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:33.896 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:33.896 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:33.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.896 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:33.896 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:33.896 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:33.896 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:33.896 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:33.896 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
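On the initiator side the test adds the generated host NQN to the subsystem, connects, and waits for the namespace to appear, keyed on the subsystem serial number. A condensed connect-and-wait sketch (waitforserial in autotest_common.sh retries the same lsblk probe; the host NQN/ID must be the ones added via nvmf_subsystem_add_host):

  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # wait until a block device advertising the subsystem serial shows up
  for i in $(seq 1 15); do
      [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ] && break
      sleep 2
  done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # later torn down the same way as in the trace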
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:34.155 [2024-12-16 05:45:07.783677] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:34.155 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:34.155 could not add new controller: failed to write to nvme-fabrics device 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.155 
05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.155 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:35.091 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:35.091 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:35.091 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:35.091 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:35.091 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:37.624 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:37.624 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:37.624 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:37.624 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:37.624 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:37.624 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:37.625 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:37.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:37.625 
05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.625 [2024-12-16 05:45:11.066427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.625 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:38.562 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:38.562 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:38.562 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:38.562 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:38.562 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:40.464 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:40.464 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:40.464 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:40.464 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:40.464 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:40.464 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:40.464 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:40.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.464 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:40.464 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:40.464 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:40.464 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.464 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:40.464 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.723 [2024-12-16 05:45:14.356530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.723 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:41.658 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:41.658 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:41.658 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:41.658 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:41.658 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:44.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.191 [2024-12-16 05:45:17.645962] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.191 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:45.127 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:45.127 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:45.127 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:45.127 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:45.127 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:47.031 
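The lsblk/grep/sleep lines repeating through this stretch are the waitforserial and waitforserial_disconnect helpers from common/autotest_common.sh, which poll the host's block-device list until a namespace with the given serial shows up (or goes away) after nvme connect/disconnect. A simplified reconstruction of the polling loop, hedged because the real helper carries more bookkeeping than shown here:

    # Simplified sketch of the wait loop seen in the trace; the 15-iteration /
    # 2-second cadence matches the log, variable names are assumptions.
    waitforserial() {
        local serial=$1 expected=${2:-1} i=0 found
        while (( i++ <= 15 )); do
            sleep 2
            found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( found == expected )) && return 0   # namespace visible on the host
        done
        return 1                                  # never appeared, fail the test
    }

    waitforserial SPDKISFASTANDAWESOME            # invoked right after nvme connect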
05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:47.031 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:47.031 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:47.031 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:47.031 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:47.031 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:47.031 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:47.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.290 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:47.290 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:47.290 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:47.290 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:47.290 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:47.290 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.290 [2024-12-16 05:45:21.046395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.290 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:48.668 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:48.668 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:48.668 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:48.668 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:48.668 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:50.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.694 [2024-12-16 05:45:24.345830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.694 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.695 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:50.695 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.695 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.695 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.695 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:51.631 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:51.631 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:51.631 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:51.631 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:51.631 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:54.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:54.168 
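That completes the five passes of target/rpc.sh lines 81-94: each pass builds the subsystem from scratch, exposes a TCP listener on 10.0.0.2:4420, attaches Malloc1 as namespace 5, performs a full host connect/disconnect round trip, then removes the namespace and deletes the subsystem. A condensed sketch of one pass as it appears in the trace (rpc_cmd and the waitforserial helpers come from the test suite; $loops is 5 in this run):

    # One iteration of the create/connect/teardown loop traced above.
    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # bdev -> NSID 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
            -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME              # block until the namespace appears
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME   # block until it is gone again
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done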
05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.168 [2024-12-16 05:45:27.660127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.168 [2024-12-16 05:45:27.708166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.168 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.169 
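The second pass (target/rpc.sh lines 99-107), traced from here through the nvmf_get_stats call, runs the same subsystem lifecycle another five times but never connects a host: the namespace is added without an explicit NSID and removed again as NSID 1, purely over RPC. A hedged sketch of one iteration (NSID 1 matches the auto-assignment seen in the trace):

    # RPC-only lifecycle pass, no nvme connect in between.
    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # NSID auto-assigned (1 here)
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done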
05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.169 [2024-12-16 05:45:27.756312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.169 [2024-12-16 05:45:27.804493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.169 [2024-12-16 05:45:27.852656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:54.169 "tick_rate": 2100000000, 00:16:54.169 "poll_groups": [ 00:16:54.169 { 00:16:54.169 "name": "nvmf_tgt_poll_group_000", 00:16:54.169 "admin_qpairs": 2, 00:16:54.169 "io_qpairs": 168, 00:16:54.169 "current_admin_qpairs": 0, 00:16:54.169 "current_io_qpairs": 0, 00:16:54.169 "pending_bdev_io": 0, 00:16:54.169 "completed_nvme_io": 316, 00:16:54.169 "transports": [ 00:16:54.169 { 00:16:54.169 "trtype": "TCP" 00:16:54.169 } 00:16:54.169 ] 00:16:54.169 }, 00:16:54.169 { 00:16:54.169 "name": "nvmf_tgt_poll_group_001", 00:16:54.169 "admin_qpairs": 2, 00:16:54.169 "io_qpairs": 168, 00:16:54.169 "current_admin_qpairs": 0, 00:16:54.169 "current_io_qpairs": 0, 00:16:54.169 "pending_bdev_io": 0, 00:16:54.169 "completed_nvme_io": 268, 00:16:54.169 "transports": [ 00:16:54.169 { 00:16:54.169 "trtype": "TCP" 00:16:54.169 } 00:16:54.169 ] 00:16:54.169 }, 00:16:54.169 { 00:16:54.169 "name": "nvmf_tgt_poll_group_002", 00:16:54.169 "admin_qpairs": 1, 00:16:54.169 "io_qpairs": 168, 00:16:54.169 "current_admin_qpairs": 0, 00:16:54.169 "current_io_qpairs": 0, 00:16:54.169 "pending_bdev_io": 0, 00:16:54.169 "completed_nvme_io": 219, 00:16:54.169 "transports": [ 00:16:54.169 { 00:16:54.169 "trtype": "TCP" 00:16:54.169 } 00:16:54.169 ] 00:16:54.169 }, 00:16:54.169 { 00:16:54.169 "name": "nvmf_tgt_poll_group_003", 00:16:54.169 "admin_qpairs": 2, 00:16:54.169 "io_qpairs": 168, 00:16:54.169 "current_admin_qpairs": 0, 00:16:54.169 "current_io_qpairs": 0, 00:16:54.169 "pending_bdev_io": 0, 00:16:54.169 "completed_nvme_io": 219, 00:16:54.169 "transports": [ 00:16:54.169 { 00:16:54.169 "trtype": "TCP" 00:16:54.169 } 00:16:54.169 ] 00:16:54.169 } 00:16:54.169 ] 00:16:54.169 }' 00:16:54.169 05:45:27 
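The JSON captured into $stats above is then reduced by the jsum helper: the '(( 7 > 0 ))' and '(( 672 > 0 ))' checks that follow are the summed admin_qpairs (2+2+1+2 across the four poll groups) and io_qpairs (4 x 168). A small reconstruction of that aggregation, assuming $stats holds the nvmf_get_stats output exactly as in the trace:

    # jsum-style reduction over the captured nvmf_get_stats JSON.
    stats=$(rpc_cmd nvmf_get_stats)

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+2+1+2 = 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 4 * 168 = 672 in this run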
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:54.169 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:54.170 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:54.170 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:54.170 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:54.170 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:54.170 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:16:54.170 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:54.170 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:54.170 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:54.170 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:54.170 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:54.170 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:54.170 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:54.170 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:54.170 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:54.170 rmmod nvme_tcp 00:16:54.429 rmmod nvme_fabrics 00:16:54.429 rmmod nvme_keyring 00:16:54.429 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:54.429 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:54.429 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:54.429 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@513 -- # '[' -n 3311563 ']' 00:16:54.429 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # killprocess 3311563 00:16:54.429 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 3311563 ']' 00:16:54.429 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 3311563 00:16:54.429 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:16:54.429 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:54.429 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3311563 00:16:54.429 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:54.429 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:54.429 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
3311563' 00:16:54.429 killing process with pid 3311563 00:16:54.429 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 3311563 00:16:54.429 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 3311563 00:16:54.688 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:54.688 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:54.688 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:54.688 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:54.688 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-save 00:16:54.688 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:54.688 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-restore 00:16:54.688 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:54.688 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:54.688 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.688 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:54.688 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.593 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:56.593 00:16:56.593 real 0m32.210s 00:16:56.593 user 1m38.597s 00:16:56.593 sys 0m6.056s 00:16:56.593 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:56.593 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.593 ************************************ 00:16:56.593 END TEST nvmf_rpc 00:16:56.593 ************************************ 00:16:56.593 05:45:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:56.593 05:45:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:56.593 05:45:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:56.593 05:45:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:56.853 ************************************ 00:16:56.853 START TEST nvmf_invalid 00:16:56.854 ************************************ 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:56.854 * Looking for test storage... 
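Before the nvmf_invalid run that starts here, nvmftestfini unwound the previous target: the nvme-tcp and nvme-fabrics modules were removed, the nvmf_tgt process (pid 3311563 in this run) was killed and reaped, iptables rules were restored minus the SPDK_NVMF entries, the SPDK network namespace was removed, and the test address was flushed from cvl_0_1. A hedged outline of those steps, with values taken from this run rather than being fixed constants:

    # Outline of the nvmftestfini teardown traced above; the pid, namespace and
    # interface names are specific to this run.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 3311563 && wait 3311563                           # stop the nvmf_tgt reactor
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-added rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # remove the test namespace
    ip -4 addr flush cvl_0_1                               # clear the target-side address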
00:16:56.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:56.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.854 --rc genhtml_branch_coverage=1 00:16:56.854 --rc genhtml_function_coverage=1 00:16:56.854 --rc genhtml_legend=1 00:16:56.854 --rc geninfo_all_blocks=1 00:16:56.854 --rc geninfo_unexecuted_blocks=1 00:16:56.854 00:16:56.854 ' 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:56.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.854 --rc genhtml_branch_coverage=1 00:16:56.854 --rc genhtml_function_coverage=1 00:16:56.854 --rc genhtml_legend=1 00:16:56.854 --rc geninfo_all_blocks=1 00:16:56.854 --rc geninfo_unexecuted_blocks=1 00:16:56.854 00:16:56.854 ' 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:56.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.854 --rc genhtml_branch_coverage=1 00:16:56.854 --rc genhtml_function_coverage=1 00:16:56.854 --rc genhtml_legend=1 00:16:56.854 --rc geninfo_all_blocks=1 00:16:56.854 --rc geninfo_unexecuted_blocks=1 00:16:56.854 00:16:56.854 ' 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:56.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.854 --rc genhtml_branch_coverage=1 00:16:56.854 --rc genhtml_function_coverage=1 00:16:56.854 --rc genhtml_legend=1 00:16:56.854 --rc geninfo_all_blocks=1 00:16:56.854 --rc geninfo_unexecuted_blocks=1 00:16:56.854 00:16:56.854 ' 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:56.854 05:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:56.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:56.854 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:56.855 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:56.855 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:56.855 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:56.855 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:56.855 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:56.855 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.855 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:56.855 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:56.855 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:56.855 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.855 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.855 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.855 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:56.855 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:56.855 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:56.855 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:02.130 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:02.130 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
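The device discovery running through these lines classifies the host's NICs by PCI vendor/device ID: Intel E810 ports (0x1592, 0x159b) and X722 (0x37d2), plus a list of Mellanox ConnectX IDs, with only the E810 list kept for a TCP run. The per-port checks continue in the trace below; a condensed sketch of the selection logic, assuming the pci_bus_cache associative array (vendor:device mapped to PCI addresses) that nvmf/common.sh builds earlier in the run:

    # NIC selection sketch; device IDs are the ones visible in this trace and
    # pci_bus_cache is assumed to already be populated.
    intel=0x8086
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    pci_devs=("${e810[@]}")                    # tcp transport: keep only E810 ports
    for pci in "${pci_devs[@]}"; do
        # each PCI function exposes its netdev name(s) under sysfs
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
    done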
00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:02.130 Found net devices under 0000:af:00.0: cvl_0_0 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:02.130 Found net devices under 0000:af:00.1: cvl_0_1 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # is_hw=yes 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:02.130 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:02.390 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:02.390 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:02.390 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:02.390 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:02.390 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:02.390 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:02.390 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:02.390 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:02.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:17:02.390 00:17:02.390 --- 10.0.0.2 ping statistics --- 00:17:02.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.390 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:17:02.390 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:02.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:02.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:17:02.390 00:17:02.390 --- 10.0.0.1 ping statistics --- 00:17:02.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.390 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:17:02.390 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.390 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # return 0 00:17:02.390 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:02.390 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.390 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:02.390 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:02.390 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.390 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:02.390 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:02.649 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:02.649 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:02.649 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:02.649 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:02.649 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:02.649 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # nvmfpid=3319598 00:17:02.649 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # waitforlisten 3319598 00:17:02.649 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 3319598 ']' 00:17:02.649 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.649 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:02.649 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.649 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:02.649 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:02.649 [2024-12-16 05:45:36.320902] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
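The DPDK/EAL startup output continues below; the lines above complete the per-test network setup that every nvmf TCP test in this job repeats: the first E810 port (cvl_0_0) is moved into a fresh namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened with an iptables rule tagged SPDK_NVMF so teardown can find it, both directions are pinged, and nvmf_tgt is launched inside the namespace. A condensed replay, with the full workspace path shortened and the iptables comment abbreviated:

    # Namespace topology and target launch as traced in this run.
    ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"                 # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev "$ini_if"             # initiator side (root namespace)
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP listener port and tag the rule for later cleanup
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: test rule'
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
    # start the target inside the namespace; the suite then waits for its RPC socket
    ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!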
00:17:02.649 [2024-12-16 05:45:36.320948] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.649 [2024-12-16 05:45:36.381155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:02.649 [2024-12-16 05:45:36.423093] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.649 [2024-12-16 05:45:36.423132] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.649 [2024-12-16 05:45:36.423139] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.649 [2024-12-16 05:45:36.423145] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.649 [2024-12-16 05:45:36.423150] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.649 [2024-12-16 05:45:36.423192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.649 [2024-12-16 05:45:36.423269] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.649 [2024-12-16 05:45:36.423362] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.649 [2024-12-16 05:45:36.423363] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.908 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:02.908 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:17:02.908 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:02.908 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:02.908 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:02.908 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.908 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:02.908 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode26034 00:17:02.908 [2024-12-16 05:45:36.739322] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:03.167 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:03.167 { 00:17:03.167 "nqn": "nqn.2016-06.io.spdk:cnode26034", 00:17:03.167 "tgt_name": "foobar", 00:17:03.167 "method": "nvmf_create_subsystem", 00:17:03.167 "req_id": 1 00:17:03.167 } 00:17:03.167 Got JSON-RPC error response 00:17:03.167 response: 00:17:03.167 { 00:17:03.167 "code": -32603, 00:17:03.167 "message": "Unable to find target foobar" 00:17:03.167 }' 00:17:03.167 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:03.167 { 00:17:03.167 "nqn": "nqn.2016-06.io.spdk:cnode26034", 00:17:03.167 "tgt_name": "foobar", 00:17:03.167 "method": "nvmf_create_subsystem", 00:17:03.167 "req_id": 1 00:17:03.167 } 00:17:03.167 Got JSON-RPC error response 00:17:03.167 
response: 00:17:03.167 { 00:17:03.167 "code": -32603, 00:17:03.167 "message": "Unable to find target foobar" 00:17:03.167 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:03.167 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:03.167 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22820 00:17:03.167 [2024-12-16 05:45:36.948054] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22820: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:03.167 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:03.167 { 00:17:03.167 "nqn": "nqn.2016-06.io.spdk:cnode22820", 00:17:03.167 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:03.167 "method": "nvmf_create_subsystem", 00:17:03.167 "req_id": 1 00:17:03.167 } 00:17:03.167 Got JSON-RPC error response 00:17:03.167 response: 00:17:03.167 { 00:17:03.167 "code": -32602, 00:17:03.167 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:03.167 }' 00:17:03.167 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:03.167 { 00:17:03.167 "nqn": "nqn.2016-06.io.spdk:cnode22820", 00:17:03.167 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:03.167 "method": "nvmf_create_subsystem", 00:17:03.167 "req_id": 1 00:17:03.167 } 00:17:03.167 Got JSON-RPC error response 00:17:03.167 response: 00:17:03.167 { 00:17:03.167 "code": -32602, 00:17:03.167 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:03.167 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:03.167 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:03.167 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25693 00:17:03.427 [2024-12-16 05:45:37.152717] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25693: invalid model number 'SPDK_Controller' 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:03.427 { 00:17:03.427 "nqn": "nqn.2016-06.io.spdk:cnode25693", 00:17:03.427 "model_number": "SPDK_Controller\u001f", 00:17:03.427 "method": "nvmf_create_subsystem", 00:17:03.427 "req_id": 1 00:17:03.427 } 00:17:03.427 Got JSON-RPC error response 00:17:03.427 response: 00:17:03.427 { 00:17:03.427 "code": -32602, 00:17:03.427 "message": "Invalid MN SPDK_Controller\u001f" 00:17:03.427 }' 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:03.427 { 00:17:03.427 "nqn": "nqn.2016-06.io.spdk:cnode25693", 00:17:03.427 "model_number": "SPDK_Controller\u001f", 00:17:03.427 "method": "nvmf_create_subsystem", 00:17:03.427 "req_id": 1 00:17:03.427 } 00:17:03.427 Got JSON-RPC error response 00:17:03.427 response: 00:17:03.427 { 00:17:03.427 "code": -32602, 00:17:03.427 "message": "Invalid MN SPDK_Controller\u001f" 00:17:03.427 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:03.427 05:45:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.427 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
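The character-by-character string building traced around this point feeds the next test case; the three RPC probes above all follow one pattern: hand nvmf_create_subsystem a deliberately bad value (an unknown target name, then a serial number or model number containing the control character 0x1f), capture the JSON-RPC error text, and assert on the message substring. How invalid.sh redirects the output is not visible in the trace, so the 2>&1 capture below is an assumption; the NQNs, messages, and error codes are the ones from this run:

    # Negative-test pattern sketched from the trace above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode26034 2>&1) || true
    [[ $out == *"Unable to find target"* ]]           # code -32603
    out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22820 2>&1) || true
    [[ $out == *"Invalid SN"* ]]                      # code -32602
    out=$($rpc nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25693 2>&1) || true
    [[ $out == *"Invalid MN"* ]]                      # code -32602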
00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo 
-e '\x79' 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.428 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 73 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ b == \- ]] 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'bqk5!H}))\Hylg";OSITV' 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'bqk5!H}))\Hylg";OSITV' nqn.2016-06.io.spdk:cnode3142 00:17:03.687 [2024-12-16 05:45:37.497898] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3142: invalid serial number 'bqk5!H}))\Hylg";OSITV' 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:03.687 { 00:17:03.687 "nqn": "nqn.2016-06.io.spdk:cnode3142", 00:17:03.687 "serial_number": "bqk5!H}))\\Hylg\";OSITV", 00:17:03.687 "method": "nvmf_create_subsystem", 00:17:03.687 "req_id": 1 00:17:03.687 } 00:17:03.687 Got JSON-RPC error response 00:17:03.687 response: 00:17:03.687 { 00:17:03.687 "code": -32602, 00:17:03.687 "message": "Invalid SN bqk5!H}))\\Hylg\";OSITV" 00:17:03.687 }' 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:03.687 { 00:17:03.687 "nqn": "nqn.2016-06.io.spdk:cnode3142", 00:17:03.687 "serial_number": "bqk5!H}))\\Hylg\";OSITV", 00:17:03.687 "method": "nvmf_create_subsystem", 00:17:03.687 "req_id": 1 00:17:03.687 } 00:17:03.687 Got JSON-RPC error response 00:17:03.687 response: 00:17:03.687 { 00:17:03.687 "code": -32602, 00:17:03.687 "message": "Invalid SN bqk5!H}))\\Hylg\";OSITV" 00:17:03.687 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' 
'50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.687 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
76 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
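This second string-building pass is gen_random_s 41, producing the random model number for the next probe the same way the 21-character serial was produced above: each position picks an entry from a table of ASCII codes 32 through 127 and appends the decoded character. The trace does not show how the index is chosen, so the RANDOM-based pick below is an assumption; invalid.sh sets RANDOM=0 at the top of the test, which is what keeps the generated strings reproducible across runs. What the helper does when the first character turns out to be '-' is also not visible, so the guard at the end is only a placeholder:

    # Compact equivalent of the gen_random_s loop being traced here.
    gen_random_s_sketch() {
        local length=$1 ll string=
        local chars=($(seq 32 127))                   # printable ASCII plus DEL
        for ((ll = 0; ll < length; ll++)); do
            # index selection is assumed; the real helper's choice is not traced
            local code=${chars[RANDOM % ${#chars[@]}]}
            string+=$(echo -e "\x$(printf %x "$code")")
        done
        # the real script checks for a leading '-' ([[ ... == \- ]]); its
        # handling is not shown, so this substitution is a placeholder
        [[ $string == -* ]] && string="_${string:1}"
        echo "$string"
    }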
00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:03.947 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo 
-e '\x63' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 68 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
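The iterations above and below are target/invalid.sh assembling a random model number one character at a time: printf %x turns a decimal codepoint into hex, echo -e '\xNN' expands it to the character, and string+= appends it. A minimal standalone sketch of that pattern, assuming a 40-character target length and the printable-ASCII range (both illustrative, not read from invalid.sh):

  # Sketch only: mirrors the printf %x / echo -e append loop in the trace.
  string=''
  length=40
  for (( ll = 0; ll < length; ll++ )); do
      code=$(( (RANDOM % 94) + 33 ))     # printable ASCII 33-126 (assumption)
      hex=$(printf %x "$code")           # decimal codepoint -> hex, e.g. 97 -> 61
      string+=$(echo -e "\\x${hex}")     # expand \x61 -> 'a' and append one character
  done
  echo "$string"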
00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x6b' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:03.948 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:03.949 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.949 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.949 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:03.949 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:03.949 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:03.949 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.949 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.949 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:03.949 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:03.949 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:03.949 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.949 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.949 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ Q == \- ]] 00:17:03.949 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Q3~5#LK4N8D?aji[Tc^OTdA7D@X?Uaic/eku7^S%' 00:17:04.208 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Q3~5#LK4N8D?aji[Tc^OTdA7D@X?Uaic/eku7^S%' nqn.2016-06.io.spdk:cnode13328 00:17:04.208 [2024-12-16 05:45:37.971491] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode13328: invalid model number 'Q3~5#LK4N8D?aji[Tc^OTdA7D@X?Uaic/eku7^S%' 00:17:04.208 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:04.208 { 00:17:04.208 "nqn": "nqn.2016-06.io.spdk:cnode13328", 00:17:04.208 "model_number": "Q3~5#LK4N8D?aji[\u007fTc^OTdA7D@X?Uaic/eku7^S%", 00:17:04.208 "method": "nvmf_create_subsystem", 00:17:04.208 "req_id": 1 00:17:04.208 } 00:17:04.208 Got JSON-RPC error response 00:17:04.208 response: 00:17:04.208 { 00:17:04.208 "code": -32602, 00:17:04.208 "message": "Invalid MN Q3~5#LK4N8D?aji[\u007fTc^OTdA7D@X?Uaic/eku7^S%" 00:17:04.208 }' 00:17:04.208 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:04.208 { 00:17:04.208 "nqn": "nqn.2016-06.io.spdk:cnode13328", 00:17:04.208 "model_number": "Q3~5#LK4N8D?aji[\u007fTc^OTdA7D@X?Uaic/eku7^S%", 00:17:04.208 "method": "nvmf_create_subsystem", 00:17:04.208 "req_id": 1 00:17:04.208 } 00:17:04.208 Got JSON-RPC error response 00:17:04.208 response: 00:17:04.208 { 00:17:04.208 "code": -32602, 00:17:04.208 "message": "Invalid MN Q3~5#LK4N8D?aji[\u007fTc^OTdA7D@X?Uaic/eku7^S%" 00:17:04.208 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:04.208 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:04.467 [2024-12-16 05:45:38.164193] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.467 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:04.726 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:04.726 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:04.726 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:04.726 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:04.726 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:04.726 [2024-12-16 05:45:38.569539] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:04.984 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:04.984 { 00:17:04.984 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:04.984 "listen_address": { 00:17:04.984 "trtype": "tcp", 00:17:04.984 "traddr": "", 00:17:04.984 "trsvcid": "4421" 00:17:04.984 }, 00:17:04.984 "method": "nvmf_subsystem_remove_listener", 00:17:04.984 "req_id": 1 00:17:04.984 } 00:17:04.984 Got JSON-RPC error response 00:17:04.984 response: 00:17:04.984 { 00:17:04.984 "code": -32602, 00:17:04.984 "message": "Invalid parameters" 00:17:04.984 }' 00:17:04.985 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:04.985 { 00:17:04.985 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:04.985 "listen_address": { 00:17:04.985 "trtype": "tcp", 00:17:04.985 "traddr": "", 00:17:04.985 "trsvcid": "4421" 00:17:04.985 }, 00:17:04.985 "method": "nvmf_subsystem_remove_listener", 00:17:04.985 "req_id": 1 00:17:04.985 } 00:17:04.985 Got JSON-RPC error response 00:17:04.985 response: 00:17:04.985 { 
00:17:04.985 "code": -32602, 00:17:04.985 "message": "Invalid parameters" 00:17:04.985 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:04.985 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6156 -i 0 00:17:04.985 [2024-12-16 05:45:38.786223] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6156: invalid cntlid range [0-65519] 00:17:04.985 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:04.985 { 00:17:04.985 "nqn": "nqn.2016-06.io.spdk:cnode6156", 00:17:04.985 "min_cntlid": 0, 00:17:04.985 "method": "nvmf_create_subsystem", 00:17:04.985 "req_id": 1 00:17:04.985 } 00:17:04.985 Got JSON-RPC error response 00:17:04.985 response: 00:17:04.985 { 00:17:04.985 "code": -32602, 00:17:04.985 "message": "Invalid cntlid range [0-65519]" 00:17:04.985 }' 00:17:04.985 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:04.985 { 00:17:04.985 "nqn": "nqn.2016-06.io.spdk:cnode6156", 00:17:04.985 "min_cntlid": 0, 00:17:04.985 "method": "nvmf_create_subsystem", 00:17:04.985 "req_id": 1 00:17:04.985 } 00:17:04.985 Got JSON-RPC error response 00:17:04.985 response: 00:17:04.985 { 00:17:04.985 "code": -32602, 00:17:04.985 "message": "Invalid cntlid range [0-65519]" 00:17:04.985 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:04.985 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27966 -i 65520 00:17:05.244 [2024-12-16 05:45:38.998969] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27966: invalid cntlid range [65520-65519] 00:17:05.244 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:05.244 { 00:17:05.244 "nqn": "nqn.2016-06.io.spdk:cnode27966", 00:17:05.244 "min_cntlid": 65520, 00:17:05.244 "method": "nvmf_create_subsystem", 00:17:05.244 "req_id": 1 00:17:05.244 } 00:17:05.244 Got JSON-RPC error response 00:17:05.244 response: 00:17:05.244 { 00:17:05.244 "code": -32602, 00:17:05.244 "message": "Invalid cntlid range [65520-65519]" 00:17:05.244 }' 00:17:05.244 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:05.244 { 00:17:05.244 "nqn": "nqn.2016-06.io.spdk:cnode27966", 00:17:05.244 "min_cntlid": 65520, 00:17:05.244 "method": "nvmf_create_subsystem", 00:17:05.244 "req_id": 1 00:17:05.244 } 00:17:05.244 Got JSON-RPC error response 00:17:05.244 response: 00:17:05.244 { 00:17:05.244 "code": -32602, 00:17:05.244 "message": "Invalid cntlid range [65520-65519]" 00:17:05.244 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:05.244 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17573 -I 0 00:17:05.503 [2024-12-16 05:45:39.195615] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17573: invalid cntlid range [1-0] 00:17:05.503 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:05.503 { 00:17:05.503 "nqn": "nqn.2016-06.io.spdk:cnode17573", 00:17:05.503 "max_cntlid": 0, 00:17:05.503 "method": "nvmf_create_subsystem", 00:17:05.503 
"req_id": 1 00:17:05.503 } 00:17:05.503 Got JSON-RPC error response 00:17:05.503 response: 00:17:05.503 { 00:17:05.503 "code": -32602, 00:17:05.503 "message": "Invalid cntlid range [1-0]" 00:17:05.503 }' 00:17:05.503 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:05.503 { 00:17:05.503 "nqn": "nqn.2016-06.io.spdk:cnode17573", 00:17:05.503 "max_cntlid": 0, 00:17:05.503 "method": "nvmf_create_subsystem", 00:17:05.503 "req_id": 1 00:17:05.503 } 00:17:05.503 Got JSON-RPC error response 00:17:05.503 response: 00:17:05.503 { 00:17:05.503 "code": -32602, 00:17:05.503 "message": "Invalid cntlid range [1-0]" 00:17:05.503 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:05.503 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17318 -I 65520 00:17:05.762 [2024-12-16 05:45:39.392293] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17318: invalid cntlid range [1-65520] 00:17:05.762 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:05.762 { 00:17:05.762 "nqn": "nqn.2016-06.io.spdk:cnode17318", 00:17:05.762 "max_cntlid": 65520, 00:17:05.762 "method": "nvmf_create_subsystem", 00:17:05.762 "req_id": 1 00:17:05.762 } 00:17:05.762 Got JSON-RPC error response 00:17:05.762 response: 00:17:05.762 { 00:17:05.762 "code": -32602, 00:17:05.762 "message": "Invalid cntlid range [1-65520]" 00:17:05.763 }' 00:17:05.763 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:05.763 { 00:17:05.763 "nqn": "nqn.2016-06.io.spdk:cnode17318", 00:17:05.763 "max_cntlid": 65520, 00:17:05.763 "method": "nvmf_create_subsystem", 00:17:05.763 "req_id": 1 00:17:05.763 } 00:17:05.763 Got JSON-RPC error response 00:17:05.763 response: 00:17:05.763 { 00:17:05.763 "code": -32602, 00:17:05.763 "message": "Invalid cntlid range [1-65520]" 00:17:05.763 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:05.763 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30572 -i 6 -I 5 00:17:05.763 [2024-12-16 05:45:39.589005] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30572: invalid cntlid range [6-5] 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:06.021 { 00:17:06.021 "nqn": "nqn.2016-06.io.spdk:cnode30572", 00:17:06.021 "min_cntlid": 6, 00:17:06.021 "max_cntlid": 5, 00:17:06.021 "method": "nvmf_create_subsystem", 00:17:06.021 "req_id": 1 00:17:06.021 } 00:17:06.021 Got JSON-RPC error response 00:17:06.021 response: 00:17:06.021 { 00:17:06.021 "code": -32602, 00:17:06.021 "message": "Invalid cntlid range [6-5]" 00:17:06.021 }' 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:06.021 { 00:17:06.021 "nqn": "nqn.2016-06.io.spdk:cnode30572", 00:17:06.021 "min_cntlid": 6, 00:17:06.021 "max_cntlid": 5, 00:17:06.021 "method": "nvmf_create_subsystem", 00:17:06.021 "req_id": 1 00:17:06.021 } 00:17:06.021 Got JSON-RPC error response 00:17:06.021 response: 00:17:06.021 { 00:17:06.021 "code": -32602, 00:17:06.021 "message": "Invalid cntlid range [6-5]" 00:17:06.021 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:06.021 05:45:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:06.021 { 00:17:06.021 "name": "foobar", 00:17:06.021 "method": "nvmf_delete_target", 00:17:06.021 "req_id": 1 00:17:06.021 } 00:17:06.021 Got JSON-RPC error response 00:17:06.021 response: 00:17:06.021 { 00:17:06.021 "code": -32602, 00:17:06.021 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:06.021 }' 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:06.021 { 00:17:06.021 "name": "foobar", 00:17:06.021 "method": "nvmf_delete_target", 00:17:06.021 "req_id": 1 00:17:06.021 } 00:17:06.021 Got JSON-RPC error response 00:17:06.021 response: 00:17:06.021 { 00:17:06.021 "code": -32602, 00:17:06.021 "message": "The specified target doesn't exist, cannot delete it." 00:17:06.021 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:06.021 rmmod nvme_tcp 00:17:06.021 rmmod nvme_fabrics 00:17:06.021 rmmod nvme_keyring 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@513 -- # '[' -n 3319598 ']' 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # killprocess 3319598 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 3319598 ']' 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 3319598 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3319598 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3319598' 00:17:06.021 killing process with pid 3319598 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 3319598 00:17:06.021 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 3319598 00:17:06.280 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:06.280 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:06.280 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:06.280 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:06.280 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-restore 00:17:06.280 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-save 00:17:06.280 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:06.280 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:06.280 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:06.280 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.280 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.280 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.816 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:08.816 00:17:08.816 real 0m11.667s 00:17:08.816 user 0m18.508s 00:17:08.816 sys 0m5.049s 00:17:08.816 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:08.816 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:08.816 ************************************ 00:17:08.816 END TEST nvmf_invalid 00:17:08.816 ************************************ 00:17:08.816 05:45:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:08.816 05:45:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:08.816 05:45:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:08.817 ************************************ 00:17:08.817 START TEST nvmf_connect_stress 00:17:08.817 ************************************ 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:08.817 * Looking for test storage... 
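The nvmftestfini teardown traced above stops the nvmf_tgt daemon through the killprocess helper: confirm the PID is still alive with kill -0, read the process name so an unrelated reuse of the PID is not killed, then kill and wait. A simplified sketch of that sequence, assuming the sudo-owned-process path and extra error handling of the real helper are omitted:

  # Simplified sketch of the killprocess flow traced above.
  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0       # already gone, nothing to do
      local name
      name=$(ps --no-headers -o comm= "$pid")      # confirm what is about to be killed
      [[ $name == sudo ]] && return 1              # the real helper treats this case specially
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true              # reap it if it is our child
  }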
00:17:08.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:08.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.817 --rc genhtml_branch_coverage=1 00:17:08.817 --rc genhtml_function_coverage=1 00:17:08.817 --rc genhtml_legend=1 00:17:08.817 --rc geninfo_all_blocks=1 00:17:08.817 --rc geninfo_unexecuted_blocks=1 00:17:08.817 00:17:08.817 ' 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:08.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.817 --rc genhtml_branch_coverage=1 00:17:08.817 --rc genhtml_function_coverage=1 00:17:08.817 --rc genhtml_legend=1 00:17:08.817 --rc geninfo_all_blocks=1 00:17:08.817 --rc geninfo_unexecuted_blocks=1 00:17:08.817 00:17:08.817 ' 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:08.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.817 --rc genhtml_branch_coverage=1 00:17:08.817 --rc genhtml_function_coverage=1 00:17:08.817 --rc genhtml_legend=1 00:17:08.817 --rc geninfo_all_blocks=1 00:17:08.817 --rc geninfo_unexecuted_blocks=1 00:17:08.817 00:17:08.817 ' 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:08.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.817 --rc genhtml_branch_coverage=1 00:17:08.817 --rc genhtml_function_coverage=1 00:17:08.817 --rc genhtml_legend=1 00:17:08.817 --rc geninfo_all_blocks=1 00:17:08.817 --rc geninfo_unexecuted_blocks=1 00:17:08.817 00:17:08.817 ' 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:08.817 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:08.818 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:08.818 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.818 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.818 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.818 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:08.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:08.818 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:08.818 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:08.818 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:08.818 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:08.818 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:08.818 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.818 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:08.818 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:08.818 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:08.818 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.818 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.818 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.818 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:08.818 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:08.818 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:08.818 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.107 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:14.107 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:14.107 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:14.107 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:14.107 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:14.107 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:14.107 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:14.107 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:14.107 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:14.107 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:14.107 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:14.107 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:14.107 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:14.107 05:45:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:14.107 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:14.107 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:14.107 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:14.107 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:14.107 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:14.107 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:14.107 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:14.107 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:14.108 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # 
echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:14.108 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:14.108 Found net devices under 0000:af:00.0: cvl_0_0 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:14.108 Found net devices under 0000:af:00.1: cvl_0_1 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
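The gather_supported_nvmf_pci_devs trace above resolves each supported NIC PCI function to its kernel interface by globbing sysfs and stripping the path, which is how cvl_0_0 and cvl_0_1 are found. A minimal standalone version of that lookup (the PCI address is simply the one from this run):

  # Sketch of the sysfs lookup used above: PCI function -> net interface name(s).
  pci=0000:af:00.0
  pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
  if [[ -e ${pci_net_devs[0]} ]]; then
      pci_net_devs=( "${pci_net_devs[@]##*/}" )    # keep only the interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  fi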
00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:17:14.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:14.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:17:14.108 00:17:14.108 --- 10.0.0.2 ping statistics --- 00:17:14.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.108 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:14.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:14.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:17:14.108 00:17:14.108 --- 10.0.0.1 ping statistics --- 00:17:14.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.108 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # return 0 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # nvmfpid=3323692 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # waitforlisten 3323692 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 3323692 ']' 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
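nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace, records its PID in nvmfpid, and then waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock is usable. A hedged sketch of such a wait loop; the polling interval, iteration cap, and socket test are assumptions, not the actual helper:

  # Illustrative wait loop only; the real waitforlisten helper differs in detail.
  rpc_sock=/var/tmp/spdk.sock
  pid=$nvmfpid                                     # nvmf_tgt PID, as captured above
  for (( i = 0; i < 100; i++ )); do
      kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      [[ -S $rpc_sock ]] && break                  # UNIX socket appears once the app listens
      sleep 0.1
  done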
00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:14.108 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.108 [2024-12-16 05:45:47.822428] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:14.109 [2024-12-16 05:45:47.822475] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.109 [2024-12-16 05:45:47.882498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:14.109 [2024-12-16 05:45:47.923307] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.109 [2024-12-16 05:45:47.923348] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.109 [2024-12-16 05:45:47.923357] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.109 [2024-12-16 05:45:47.923365] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.109 [2024-12-16 05:45:47.923371] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.109 [2024-12-16 05:45:47.923478] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.109 [2024-12-16 05:45:47.923569] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:14.109 [2024-12-16 05:45:47.923572] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.368 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:14.368 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:17:14.368 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:14.368 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:14.368 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.368 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.368 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:14.368 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.369 [2024-12-16 05:45:48.057996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.369 
05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.369 [2024-12-16 05:45:48.086232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.369 NULL1 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3323714 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.369 05:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:14.369 05:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.369 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.937 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.937 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:14.937 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.937 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.937 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.196 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.196 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:15.196 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.196 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.196 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.456 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.456 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:15.456 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.456 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.456 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.715 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.715 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:15.715 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.715 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.715 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.974 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.974 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:15.974 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.974 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.974 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.542 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.542 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:16.542 05:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.542 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.542 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.802 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.802 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:16.802 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.802 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.802 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.060 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.060 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:17.060 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.060 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.060 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.319 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.319 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:17.319 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.319 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.319 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.887 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.887 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:17.887 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.887 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.887 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.145 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.145 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:18.145 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.145 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.145 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.403 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.403 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:18.403 05:45:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.403 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.403 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.661 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.661 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:18.661 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.661 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.661 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.920 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.920 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:18.920 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.920 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.920 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.486 05:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.486 05:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:19.486 05:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.486 05:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.486 05:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.745 05:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.745 05:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:19.745 05:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.745 05:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.745 05:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.004 05:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.004 05:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:20.004 05:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.004 05:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.004 05:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.262 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.263 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:20.263 05:45:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.263 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.263 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.521 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.521 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:20.521 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.521 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.521 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.088 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.088 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:21.088 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.088 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.088 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.347 05:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.347 05:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:21.347 05:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.347 05:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.347 05:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.605 05:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.605 05:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:21.605 05:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.605 05:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.605 05:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.864 05:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.864 05:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:21.864 05:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.864 05:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.864 05:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.431 05:45:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.431 05:45:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:22.431 05:45:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.431 05:45:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.431 05:45:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.690 05:45:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.690 05:45:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:22.690 05:45:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.690 05:45:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.690 05:45:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.949 05:45:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.949 05:45:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:22.949 05:45:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.949 05:45:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.949 05:45:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.208 05:45:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.208 05:45:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:23.208 05:45:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.208 05:45:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.208 05:45:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.467 05:45:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.467 05:45:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:23.467 05:45:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.467 05:45:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.467 05:45:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.034 05:45:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.034 05:45:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:24.034 05:45:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.034 05:45:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.035 05:45:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.293 05:45:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.293 05:45:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:24.293 05:45:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.293 05:45:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.293 05:45:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.552 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3323714 00:17:24.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3323714) - No such process 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3323714 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:24.552 rmmod nvme_tcp 00:17:24.552 rmmod nvme_fabrics 00:17:24.552 rmmod nvme_keyring 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@513 -- # '[' -n 3323692 ']' 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # killprocess 3323692 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 3323692 ']' 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 3323692 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:24.552 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3323692 00:17:24.812 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:24.812 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 
= sudo ']' 00:17:24.812 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3323692' 00:17:24.812 killing process with pid 3323692 00:17:24.812 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 3323692 00:17:24.812 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 3323692 00:17:24.812 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:24.812 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:24.812 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:24.812 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:24.812 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-save 00:17:24.812 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:24.812 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-restore 00:17:24.812 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:24.812 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:24.812 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.812 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:24.812 05:45:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:27.350 00:17:27.350 real 0m18.481s 00:17:27.350 user 0m39.041s 00:17:27.350 sys 0m8.285s 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.350 ************************************ 00:17:27.350 END TEST nvmf_connect_stress 00:17:27.350 ************************************ 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:27.350 ************************************ 00:17:27.350 START TEST nvmf_fused_ordering 00:17:27.350 ************************************ 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:27.350 * Looking for test storage... 
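Before the fused_ordering test begins, a note on the connect_stress run that just ended: the repeated 'kill -0 3323714' / rpc_cmd pairs above are the supervision loop of connect_stress.sh. The target is first configured over RPC (nvmf_create_transport -t tcp -o -u 8192, nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1, nvmf_subsystem_add_listener on 10.0.0.2:4420, bdev_null_create NULL1 1000 512), twenty templated RPC snippets are appended to rpc.txt via the seq 1 20 / cat loop, and those RPCs are replayed for as long as the stress tool is alive. A rough sketch of that pattern, with rpc_cmd standing in for the autotest helper and the rpc.txt payload left abstract since its contents are not visible in this trace:

  rpcs=/tmp/rpc.txt
  : > "$rpcs"        # the harness appends 20 templated RPC snippets here (not shown in the trace)
  ./test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -t 10 &
  PERF_PID=$!
  while kill -0 "$PERF_PID" 2>/dev/null; do   # matches the repeated kill -0 checks above
      rpc_cmd < "$rpcs"                       # keep the target busy with RPCs while the stressor runs
  done
  wait "$PERF_PID"
  rm -f "$rpcs"

Once kill -0 reports 'No such process', the usual teardown follows: nvmftestfini unloads nvme-tcp/nvme-fabrics/nvme-keyring, kills the nvmf_tgt process, restores iptables by filtering out the SPDK_NVMF-tagged rules (iptables-save | grep -v SPDK_NVMF | iptables-restore), and removes the test namespace.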
00:17:27.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:27.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.350 --rc genhtml_branch_coverage=1 00:17:27.350 --rc genhtml_function_coverage=1 00:17:27.350 --rc genhtml_legend=1 00:17:27.350 --rc geninfo_all_blocks=1 00:17:27.350 --rc geninfo_unexecuted_blocks=1 00:17:27.350 00:17:27.350 ' 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:27.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.350 --rc genhtml_branch_coverage=1 00:17:27.350 --rc genhtml_function_coverage=1 00:17:27.350 --rc genhtml_legend=1 00:17:27.350 --rc geninfo_all_blocks=1 00:17:27.350 --rc geninfo_unexecuted_blocks=1 00:17:27.350 00:17:27.350 ' 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:27.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.350 --rc genhtml_branch_coverage=1 00:17:27.350 --rc genhtml_function_coverage=1 00:17:27.350 --rc genhtml_legend=1 00:17:27.350 --rc geninfo_all_blocks=1 00:17:27.350 --rc geninfo_unexecuted_blocks=1 00:17:27.350 00:17:27.350 ' 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:27.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.350 --rc genhtml_branch_coverage=1 00:17:27.350 --rc genhtml_function_coverage=1 00:17:27.350 --rc genhtml_legend=1 00:17:27.350 --rc geninfo_all_blocks=1 00:17:27.350 --rc geninfo_unexecuted_blocks=1 00:17:27.350 00:17:27.350 ' 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.350 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:27.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:27.351 05:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:32.623 05:46:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:32.623 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # 
echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:32.623 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:32.623 Found net devices under 0000:af:00.0: cvl_0_0 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:32.623 Found net devices under 0000:af:00.1: cvl_0_1 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
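The device enumeration above (gather_supported_nvmf_pci_devs) is how the harness picks its test NICs: it builds lists of supported Intel and Mellanox device IDs, keeps the two e810 ports (0x8086:0x159b) present on this host, and resolves each PCI address to its kernel interface name through sysfs, which is where cvl_0_0 and cvl_0_1 come from. A condensed sketch of that PCI-to-netdev lookup, assuming the same two ports found here and approximating the harness's link-state check by reading operstate:

  e810_ports=(0000:af:00.0 0000:af:00.1)     # reported as 0x8086:0x159b in this run
  net_devs=()
  for pci in "${e810_ports[@]}"; do
      for netdir in /sys/bus/pci/devices/$pci/net/*; do
          [[ -e $netdir ]] || continue       # skip ports with no bound network driver
          dev=${netdir##*/}                  # e.g. cvl_0_0, cvl_0_1
          [[ $(cat "$netdir/operstate" 2>/dev/null) == up ]] && net_devs+=("$dev")
      done
  done
  echo "Found net devices: ${net_devs[*]}"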
00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # is_hw=yes 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:32.623 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:32.624 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.624 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:32.624 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:32.624 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:32.624 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:17:32.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:32.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:17:32.883 00:17:32.883 --- 10.0.0.2 ping statistics --- 00:17:32.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.883 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:32.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:32.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:17:32.883 00:17:32.883 --- 10.0.0.1 ping statistics --- 00:17:32.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.883 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # return 0 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:32.883 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.142 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # nvmfpid=3328968 00:17:33.142 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:33.142 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # waitforlisten 3328968 00:17:33.142 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 3328968 ']' 00:17:33.142 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.142 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:33.142 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
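The block above is nvmf_tcp_init: the target-side port (cvl_0_0) is moved into a fresh network namespace, the initiator keeps cvl_0_1 in the root namespace, 10.0.0.1/10.0.0.2 are assigned on either side, TCP port 4420 is opened, and connectivity is checked with one ping in each direction before nvmf_tgt is started inside the namespace. Condensed into a sketch, with the interface and namespace names taken from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                 # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> initiator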
00:17:33.142 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:33.142 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.142 [2024-12-16 05:46:06.790102] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:33.142 [2024-12-16 05:46:06.790154] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.142 [2024-12-16 05:46:06.850806] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.142 [2024-12-16 05:46:06.890480] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.142 [2024-12-16 05:46:06.890522] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.142 [2024-12-16 05:46:06.890530] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.142 [2024-12-16 05:46:06.890536] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.142 [2024-12-16 05:46:06.890542] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:33.142 [2024-12-16 05:46:06.890562] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.142 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.142 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:17:33.142 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:33.142 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:33.142 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.400 [2024-12-16 05:46:07.019741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.400 [2024-12-16 05:46:07.035921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.400 NULL1 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:33.400 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.401 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.401 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.401 05:46:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:33.401 [2024-12-16 05:46:07.088080] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
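Before the fused_ordering client is launched, the target is configured over the RPC socket in the steps visible above: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, add a listener on 10.0.0.2:4420, and back the subsystem with a 1000 MiB null bdev as namespace 1. The same sequence expressed directly against scripts/rpc.py is sketched below; the test itself goes through the rpc_cmd wrapper, and the default /var/tmp/spdk.sock socket path is assumed here:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512                   # 1000 MiB null bdev, 512-byte blocks
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1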
00:17:33.401 [2024-12-16 05:46:07.088112] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3329000 ] 00:17:33.659 Attached to nqn.2016-06.io.spdk:cnode1 00:17:33.660 Namespace ID: 1 size: 1GB 00:17:33.660 fused_ordering(0) 00:17:33.660 fused_ordering(1) 00:17:33.660 fused_ordering(2) 00:17:33.660 fused_ordering(3) 00:17:33.660 fused_ordering(4) 00:17:33.660 fused_ordering(5) 00:17:33.660 fused_ordering(6) 00:17:33.660 fused_ordering(7) 00:17:33.660 fused_ordering(8) 00:17:33.660 fused_ordering(9) 00:17:33.660 fused_ordering(10) 00:17:33.660 fused_ordering(11) 00:17:33.660 fused_ordering(12) 00:17:33.660 fused_ordering(13) 00:17:33.660 fused_ordering(14) 00:17:33.660 fused_ordering(15) 00:17:33.660 fused_ordering(16) 00:17:33.660 fused_ordering(17) 00:17:33.660 fused_ordering(18) 00:17:33.660 fused_ordering(19) 00:17:33.660 fused_ordering(20) 00:17:33.660 fused_ordering(21) 00:17:33.660 fused_ordering(22) 00:17:33.660 fused_ordering(23) 00:17:33.660 fused_ordering(24) 00:17:33.660 fused_ordering(25) 00:17:33.660 fused_ordering(26) 00:17:33.660 fused_ordering(27) 00:17:33.660 fused_ordering(28) 00:17:33.660 fused_ordering(29) 00:17:33.660 fused_ordering(30) 00:17:33.660 fused_ordering(31) 00:17:33.660 fused_ordering(32) 00:17:33.660 fused_ordering(33) 00:17:33.660 fused_ordering(34) 00:17:33.660 fused_ordering(35) 00:17:33.660 fused_ordering(36) 00:17:33.660 fused_ordering(37) 00:17:33.660 fused_ordering(38) 00:17:33.660 fused_ordering(39) 00:17:33.660 fused_ordering(40) 00:17:33.660 fused_ordering(41) 00:17:33.660 fused_ordering(42) 00:17:33.660 fused_ordering(43) 00:17:33.660 fused_ordering(44) 00:17:33.660 fused_ordering(45) 00:17:33.660 fused_ordering(46) 00:17:33.660 fused_ordering(47) 00:17:33.660 fused_ordering(48) 00:17:33.660 fused_ordering(49) 00:17:33.660 fused_ordering(50) 00:17:33.660 fused_ordering(51) 00:17:33.660 fused_ordering(52) 00:17:33.660 fused_ordering(53) 00:17:33.660 fused_ordering(54) 00:17:33.660 fused_ordering(55) 00:17:33.660 fused_ordering(56) 00:17:33.660 fused_ordering(57) 00:17:33.660 fused_ordering(58) 00:17:33.660 fused_ordering(59) 00:17:33.660 fused_ordering(60) 00:17:33.660 fused_ordering(61) 00:17:33.660 fused_ordering(62) 00:17:33.660 fused_ordering(63) 00:17:33.660 fused_ordering(64) 00:17:33.660 fused_ordering(65) 00:17:33.660 fused_ordering(66) 00:17:33.660 fused_ordering(67) 00:17:33.660 fused_ordering(68) 00:17:33.660 fused_ordering(69) 00:17:33.660 fused_ordering(70) 00:17:33.660 fused_ordering(71) 00:17:33.660 fused_ordering(72) 00:17:33.660 fused_ordering(73) 00:17:33.660 fused_ordering(74) 00:17:33.660 fused_ordering(75) 00:17:33.660 fused_ordering(76) 00:17:33.660 fused_ordering(77) 00:17:33.660 fused_ordering(78) 00:17:33.660 fused_ordering(79) 00:17:33.660 fused_ordering(80) 00:17:33.660 fused_ordering(81) 00:17:33.660 fused_ordering(82) 00:17:33.660 fused_ordering(83) 00:17:33.660 fused_ordering(84) 00:17:33.660 fused_ordering(85) 00:17:33.660 fused_ordering(86) 00:17:33.660 fused_ordering(87) 00:17:33.660 fused_ordering(88) 00:17:33.660 fused_ordering(89) 00:17:33.660 fused_ordering(90) 00:17:33.660 fused_ordering(91) 00:17:33.660 fused_ordering(92) 00:17:33.660 fused_ordering(93) 00:17:33.660 fused_ordering(94) 00:17:33.660 fused_ordering(95) 00:17:33.660 fused_ordering(96) 00:17:33.660 fused_ordering(97) 00:17:33.660 fused_ordering(98) 
00:17:33.660 fused_ordering(99) 00:17:33.660 fused_ordering(100) 00:17:33.660 fused_ordering(101) 00:17:33.660 fused_ordering(102) 00:17:33.660 fused_ordering(103) 00:17:33.660 fused_ordering(104) 00:17:33.660 fused_ordering(105) 00:17:33.660 fused_ordering(106) 00:17:33.660 fused_ordering(107) 00:17:33.660 fused_ordering(108) 00:17:33.660 fused_ordering(109) 00:17:33.660 fused_ordering(110) 00:17:33.660 fused_ordering(111) 00:17:33.660 fused_ordering(112) 00:17:33.660 fused_ordering(113) 00:17:33.660 fused_ordering(114) 00:17:33.660 fused_ordering(115) 00:17:33.660 fused_ordering(116) 00:17:33.660 fused_ordering(117) 00:17:33.660 fused_ordering(118) 00:17:33.660 fused_ordering(119) 00:17:33.660 fused_ordering(120) 00:17:33.660 fused_ordering(121) 00:17:33.660 fused_ordering(122) 00:17:33.660 fused_ordering(123) 00:17:33.660 fused_ordering(124) 00:17:33.660 fused_ordering(125) 00:17:33.660 fused_ordering(126) 00:17:33.660 fused_ordering(127) 00:17:33.660 fused_ordering(128) 00:17:33.660 fused_ordering(129) 00:17:33.660 fused_ordering(130) 00:17:33.660 fused_ordering(131) 00:17:33.660 fused_ordering(132) 00:17:33.660 fused_ordering(133) 00:17:33.660 fused_ordering(134) 00:17:33.660 fused_ordering(135) 00:17:33.660 fused_ordering(136) 00:17:33.660 fused_ordering(137) 00:17:33.660 fused_ordering(138) 00:17:33.660 fused_ordering(139) 00:17:33.660 fused_ordering(140) 00:17:33.660 fused_ordering(141) 00:17:33.660 fused_ordering(142) 00:17:33.660 fused_ordering(143) 00:17:33.660 fused_ordering(144) 00:17:33.660 fused_ordering(145) 00:17:33.660 fused_ordering(146) 00:17:33.660 fused_ordering(147) 00:17:33.660 fused_ordering(148) 00:17:33.660 fused_ordering(149) 00:17:33.660 fused_ordering(150) 00:17:33.660 fused_ordering(151) 00:17:33.660 fused_ordering(152) 00:17:33.660 fused_ordering(153) 00:17:33.660 fused_ordering(154) 00:17:33.660 fused_ordering(155) 00:17:33.660 fused_ordering(156) 00:17:33.660 fused_ordering(157) 00:17:33.660 fused_ordering(158) 00:17:33.660 fused_ordering(159) 00:17:33.660 fused_ordering(160) 00:17:33.660 fused_ordering(161) 00:17:33.660 fused_ordering(162) 00:17:33.660 fused_ordering(163) 00:17:33.660 fused_ordering(164) 00:17:33.660 fused_ordering(165) 00:17:33.660 fused_ordering(166) 00:17:33.660 fused_ordering(167) 00:17:33.660 fused_ordering(168) 00:17:33.660 fused_ordering(169) 00:17:33.660 fused_ordering(170) 00:17:33.660 fused_ordering(171) 00:17:33.660 fused_ordering(172) 00:17:33.660 fused_ordering(173) 00:17:33.660 fused_ordering(174) 00:17:33.660 fused_ordering(175) 00:17:33.660 fused_ordering(176) 00:17:33.660 fused_ordering(177) 00:17:33.660 fused_ordering(178) 00:17:33.660 fused_ordering(179) 00:17:33.660 fused_ordering(180) 00:17:33.660 fused_ordering(181) 00:17:33.660 fused_ordering(182) 00:17:33.660 fused_ordering(183) 00:17:33.660 fused_ordering(184) 00:17:33.660 fused_ordering(185) 00:17:33.660 fused_ordering(186) 00:17:33.660 fused_ordering(187) 00:17:33.660 fused_ordering(188) 00:17:33.660 fused_ordering(189) 00:17:33.660 fused_ordering(190) 00:17:33.660 fused_ordering(191) 00:17:33.660 fused_ordering(192) 00:17:33.660 fused_ordering(193) 00:17:33.660 fused_ordering(194) 00:17:33.660 fused_ordering(195) 00:17:33.660 fused_ordering(196) 00:17:33.660 fused_ordering(197) 00:17:33.660 fused_ordering(198) 00:17:33.660 fused_ordering(199) 00:17:33.660 fused_ordering(200) 00:17:33.660 fused_ordering(201) 00:17:33.660 fused_ordering(202) 00:17:33.660 fused_ordering(203) 00:17:33.660 fused_ordering(204) 00:17:33.660 fused_ordering(205) 00:17:33.919 
fused_ordering(206) 00:17:33.919 fused_ordering(207) 00:17:33.919 fused_ordering(208) 00:17:33.919 fused_ordering(209) 00:17:33.919 fused_ordering(210) 00:17:33.919 fused_ordering(211) 00:17:33.919 fused_ordering(212) 00:17:33.919 fused_ordering(213) 00:17:33.919 fused_ordering(214) 00:17:33.919 fused_ordering(215) 00:17:33.919 fused_ordering(216) 00:17:33.919 fused_ordering(217) 00:17:33.919 fused_ordering(218) 00:17:33.919 fused_ordering(219) 00:17:33.919 fused_ordering(220) 00:17:33.919 fused_ordering(221) 00:17:33.919 fused_ordering(222) 00:17:33.919 fused_ordering(223) 00:17:33.919 fused_ordering(224) 00:17:33.919 fused_ordering(225) 00:17:33.919 fused_ordering(226) 00:17:33.919 fused_ordering(227) 00:17:33.919 fused_ordering(228) 00:17:33.919 fused_ordering(229) 00:17:33.919 fused_ordering(230) 00:17:33.919 fused_ordering(231) 00:17:33.919 fused_ordering(232) 00:17:33.919 fused_ordering(233) 00:17:33.919 fused_ordering(234) 00:17:33.919 fused_ordering(235) 00:17:33.919 fused_ordering(236) 00:17:33.919 fused_ordering(237) 00:17:33.919 fused_ordering(238) 00:17:33.919 fused_ordering(239) 00:17:33.919 fused_ordering(240) 00:17:33.919 fused_ordering(241) 00:17:33.919 fused_ordering(242) 00:17:33.919 fused_ordering(243) 00:17:33.919 fused_ordering(244) 00:17:33.919 fused_ordering(245) 00:17:33.919 fused_ordering(246) 00:17:33.919 fused_ordering(247) 00:17:33.919 fused_ordering(248) 00:17:33.919 fused_ordering(249) 00:17:33.919 fused_ordering(250) 00:17:33.919 fused_ordering(251) 00:17:33.919 fused_ordering(252) 00:17:33.919 fused_ordering(253) 00:17:33.919 fused_ordering(254) 00:17:33.919 fused_ordering(255) 00:17:33.919 fused_ordering(256) 00:17:33.919 fused_ordering(257) 00:17:33.919 fused_ordering(258) 00:17:33.919 fused_ordering(259) 00:17:33.919 fused_ordering(260) 00:17:33.919 fused_ordering(261) 00:17:33.919 fused_ordering(262) 00:17:33.919 fused_ordering(263) 00:17:33.919 fused_ordering(264) 00:17:33.919 fused_ordering(265) 00:17:33.919 fused_ordering(266) 00:17:33.919 fused_ordering(267) 00:17:33.919 fused_ordering(268) 00:17:33.919 fused_ordering(269) 00:17:33.919 fused_ordering(270) 00:17:33.919 fused_ordering(271) 00:17:33.919 fused_ordering(272) 00:17:33.919 fused_ordering(273) 00:17:33.919 fused_ordering(274) 00:17:33.919 fused_ordering(275) 00:17:33.919 fused_ordering(276) 00:17:33.919 fused_ordering(277) 00:17:33.919 fused_ordering(278) 00:17:33.919 fused_ordering(279) 00:17:33.919 fused_ordering(280) 00:17:33.919 fused_ordering(281) 00:17:33.919 fused_ordering(282) 00:17:33.919 fused_ordering(283) 00:17:33.919 fused_ordering(284) 00:17:33.919 fused_ordering(285) 00:17:33.919 fused_ordering(286) 00:17:33.919 fused_ordering(287) 00:17:33.919 fused_ordering(288) 00:17:33.919 fused_ordering(289) 00:17:33.919 fused_ordering(290) 00:17:33.919 fused_ordering(291) 00:17:33.919 fused_ordering(292) 00:17:33.919 fused_ordering(293) 00:17:33.919 fused_ordering(294) 00:17:33.919 fused_ordering(295) 00:17:33.919 fused_ordering(296) 00:17:33.919 fused_ordering(297) 00:17:33.919 fused_ordering(298) 00:17:33.919 fused_ordering(299) 00:17:33.919 fused_ordering(300) 00:17:33.919 fused_ordering(301) 00:17:33.919 fused_ordering(302) 00:17:33.919 fused_ordering(303) 00:17:33.919 fused_ordering(304) 00:17:33.919 fused_ordering(305) 00:17:33.919 fused_ordering(306) 00:17:33.919 fused_ordering(307) 00:17:33.919 fused_ordering(308) 00:17:33.919 fused_ordering(309) 00:17:33.919 fused_ordering(310) 00:17:33.919 fused_ordering(311) 00:17:33.919 fused_ordering(312) 00:17:33.919 fused_ordering(313) 
00:17:33.919 fused_ordering(314) 00:17:33.919 fused_ordering(315) 00:17:33.919 fused_ordering(316) 00:17:33.919 fused_ordering(317) 00:17:33.919 fused_ordering(318) 00:17:33.919 fused_ordering(319) 00:17:33.919 fused_ordering(320) 00:17:33.919 fused_ordering(321) 00:17:33.919 fused_ordering(322) 00:17:33.919 fused_ordering(323) 00:17:33.919 fused_ordering(324) 00:17:33.919 fused_ordering(325) 00:17:33.919 fused_ordering(326) 00:17:33.919 fused_ordering(327) 00:17:33.919 fused_ordering(328) 00:17:33.919 fused_ordering(329) 00:17:33.919 fused_ordering(330) 00:17:33.919 fused_ordering(331) 00:17:33.919 fused_ordering(332) 00:17:33.919 fused_ordering(333) 00:17:33.919 fused_ordering(334) 00:17:33.919 fused_ordering(335) 00:17:33.919 fused_ordering(336) 00:17:33.919 fused_ordering(337) 00:17:33.919 fused_ordering(338) 00:17:33.919 fused_ordering(339) 00:17:33.919 fused_ordering(340) 00:17:33.919 fused_ordering(341) 00:17:33.919 fused_ordering(342) 00:17:33.919 fused_ordering(343) 00:17:33.919 fused_ordering(344) 00:17:33.919 fused_ordering(345) 00:17:33.919 fused_ordering(346) 00:17:33.919 fused_ordering(347) 00:17:33.919 fused_ordering(348) 00:17:33.919 fused_ordering(349) 00:17:33.919 fused_ordering(350) 00:17:33.919 fused_ordering(351) 00:17:33.919 fused_ordering(352) 00:17:33.919 fused_ordering(353) 00:17:33.919 fused_ordering(354) 00:17:33.919 fused_ordering(355) 00:17:33.919 fused_ordering(356) 00:17:33.919 fused_ordering(357) 00:17:33.919 fused_ordering(358) 00:17:33.919 fused_ordering(359) 00:17:33.919 fused_ordering(360) 00:17:33.919 fused_ordering(361) 00:17:33.919 fused_ordering(362) 00:17:33.919 fused_ordering(363) 00:17:33.919 fused_ordering(364) 00:17:33.919 fused_ordering(365) 00:17:33.919 fused_ordering(366) 00:17:33.919 fused_ordering(367) 00:17:33.919 fused_ordering(368) 00:17:33.919 fused_ordering(369) 00:17:33.919 fused_ordering(370) 00:17:33.919 fused_ordering(371) 00:17:33.919 fused_ordering(372) 00:17:33.919 fused_ordering(373) 00:17:33.919 fused_ordering(374) 00:17:33.919 fused_ordering(375) 00:17:33.919 fused_ordering(376) 00:17:33.919 fused_ordering(377) 00:17:33.919 fused_ordering(378) 00:17:33.919 fused_ordering(379) 00:17:33.919 fused_ordering(380) 00:17:33.919 fused_ordering(381) 00:17:33.919 fused_ordering(382) 00:17:33.919 fused_ordering(383) 00:17:33.919 fused_ordering(384) 00:17:33.919 fused_ordering(385) 00:17:33.919 fused_ordering(386) 00:17:33.919 fused_ordering(387) 00:17:33.919 fused_ordering(388) 00:17:33.919 fused_ordering(389) 00:17:33.919 fused_ordering(390) 00:17:33.920 fused_ordering(391) 00:17:33.920 fused_ordering(392) 00:17:33.920 fused_ordering(393) 00:17:33.920 fused_ordering(394) 00:17:33.920 fused_ordering(395) 00:17:33.920 fused_ordering(396) 00:17:33.920 fused_ordering(397) 00:17:33.920 fused_ordering(398) 00:17:33.920 fused_ordering(399) 00:17:33.920 fused_ordering(400) 00:17:33.920 fused_ordering(401) 00:17:33.920 fused_ordering(402) 00:17:33.920 fused_ordering(403) 00:17:33.920 fused_ordering(404) 00:17:33.920 fused_ordering(405) 00:17:33.920 fused_ordering(406) 00:17:33.920 fused_ordering(407) 00:17:33.920 fused_ordering(408) 00:17:33.920 fused_ordering(409) 00:17:33.920 fused_ordering(410) 00:17:34.178 fused_ordering(411) 00:17:34.178 fused_ordering(412) 00:17:34.178 fused_ordering(413) 00:17:34.178 fused_ordering(414) 00:17:34.178 fused_ordering(415) 00:17:34.178 fused_ordering(416) 00:17:34.178 fused_ordering(417) 00:17:34.178 fused_ordering(418) 00:17:34.178 fused_ordering(419) 00:17:34.178 fused_ordering(420) 00:17:34.178 
fused_ordering(421) 00:17:34.178 fused_ordering(422) 00:17:34.178 fused_ordering(423) 00:17:34.178 fused_ordering(424) 00:17:34.178 fused_ordering(425) 00:17:34.178 fused_ordering(426) 00:17:34.178 fused_ordering(427) 00:17:34.178 fused_ordering(428) 00:17:34.178 fused_ordering(429) 00:17:34.178 fused_ordering(430) 00:17:34.178 fused_ordering(431) 00:17:34.178 fused_ordering(432) 00:17:34.178 fused_ordering(433) 00:17:34.178 fused_ordering(434) 00:17:34.178 fused_ordering(435) 00:17:34.178 fused_ordering(436) 00:17:34.178 fused_ordering(437) 00:17:34.178 fused_ordering(438) 00:17:34.178 fused_ordering(439) 00:17:34.178 fused_ordering(440) 00:17:34.178 fused_ordering(441) 00:17:34.178 fused_ordering(442) 00:17:34.178 fused_ordering(443) 00:17:34.178 fused_ordering(444) 00:17:34.178 fused_ordering(445) 00:17:34.178 fused_ordering(446) 00:17:34.178 fused_ordering(447) 00:17:34.178 fused_ordering(448) 00:17:34.178 fused_ordering(449) 00:17:34.178 fused_ordering(450) 00:17:34.178 fused_ordering(451) 00:17:34.178 fused_ordering(452) 00:17:34.178 fused_ordering(453) 00:17:34.178 fused_ordering(454) 00:17:34.178 fused_ordering(455) 00:17:34.178 fused_ordering(456) 00:17:34.178 fused_ordering(457) 00:17:34.178 fused_ordering(458) 00:17:34.178 fused_ordering(459) 00:17:34.178 fused_ordering(460) 00:17:34.178 fused_ordering(461) 00:17:34.178 fused_ordering(462) 00:17:34.178 fused_ordering(463) 00:17:34.178 fused_ordering(464) 00:17:34.178 fused_ordering(465) 00:17:34.178 fused_ordering(466) 00:17:34.178 fused_ordering(467) 00:17:34.178 fused_ordering(468) 00:17:34.178 fused_ordering(469) 00:17:34.178 fused_ordering(470) 00:17:34.178 fused_ordering(471) 00:17:34.178 fused_ordering(472) 00:17:34.178 fused_ordering(473) 00:17:34.178 fused_ordering(474) 00:17:34.178 fused_ordering(475) 00:17:34.178 fused_ordering(476) 00:17:34.178 fused_ordering(477) 00:17:34.178 fused_ordering(478) 00:17:34.178 fused_ordering(479) 00:17:34.178 fused_ordering(480) 00:17:34.178 fused_ordering(481) 00:17:34.178 fused_ordering(482) 00:17:34.178 fused_ordering(483) 00:17:34.178 fused_ordering(484) 00:17:34.178 fused_ordering(485) 00:17:34.178 fused_ordering(486) 00:17:34.178 fused_ordering(487) 00:17:34.178 fused_ordering(488) 00:17:34.178 fused_ordering(489) 00:17:34.178 fused_ordering(490) 00:17:34.178 fused_ordering(491) 00:17:34.178 fused_ordering(492) 00:17:34.178 fused_ordering(493) 00:17:34.178 fused_ordering(494) 00:17:34.178 fused_ordering(495) 00:17:34.178 fused_ordering(496) 00:17:34.178 fused_ordering(497) 00:17:34.178 fused_ordering(498) 00:17:34.178 fused_ordering(499) 00:17:34.178 fused_ordering(500) 00:17:34.178 fused_ordering(501) 00:17:34.178 fused_ordering(502) 00:17:34.178 fused_ordering(503) 00:17:34.178 fused_ordering(504) 00:17:34.178 fused_ordering(505) 00:17:34.178 fused_ordering(506) 00:17:34.178 fused_ordering(507) 00:17:34.178 fused_ordering(508) 00:17:34.178 fused_ordering(509) 00:17:34.178 fused_ordering(510) 00:17:34.178 fused_ordering(511) 00:17:34.178 fused_ordering(512) 00:17:34.178 fused_ordering(513) 00:17:34.178 fused_ordering(514) 00:17:34.178 fused_ordering(515) 00:17:34.178 fused_ordering(516) 00:17:34.178 fused_ordering(517) 00:17:34.178 fused_ordering(518) 00:17:34.178 fused_ordering(519) 00:17:34.178 fused_ordering(520) 00:17:34.178 fused_ordering(521) 00:17:34.178 fused_ordering(522) 00:17:34.178 fused_ordering(523) 00:17:34.178 fused_ordering(524) 00:17:34.178 fused_ordering(525) 00:17:34.178 fused_ordering(526) 00:17:34.178 fused_ordering(527) 00:17:34.178 fused_ordering(528) 
00:17:34.178 fused_ordering(529) 00:17:34.178 fused_ordering(530) 00:17:34.178 fused_ordering(531) 00:17:34.178 fused_ordering(532) 00:17:34.178 fused_ordering(533) 00:17:34.178 fused_ordering(534) 00:17:34.178 fused_ordering(535) 00:17:34.178 fused_ordering(536) 00:17:34.178 fused_ordering(537) 00:17:34.178 fused_ordering(538) 00:17:34.178 fused_ordering(539) 00:17:34.178 fused_ordering(540) 00:17:34.178 fused_ordering(541) 00:17:34.178 fused_ordering(542) 00:17:34.178 fused_ordering(543) 00:17:34.178 fused_ordering(544) 00:17:34.178 fused_ordering(545) 00:17:34.178 fused_ordering(546) 00:17:34.178 fused_ordering(547) 00:17:34.178 fused_ordering(548) 00:17:34.178 fused_ordering(549) 00:17:34.178 fused_ordering(550) 00:17:34.178 fused_ordering(551) 00:17:34.178 fused_ordering(552) 00:17:34.178 fused_ordering(553) 00:17:34.178 fused_ordering(554) 00:17:34.178 fused_ordering(555) 00:17:34.178 fused_ordering(556) 00:17:34.178 fused_ordering(557) 00:17:34.178 fused_ordering(558) 00:17:34.178 fused_ordering(559) 00:17:34.178 fused_ordering(560) 00:17:34.178 fused_ordering(561) 00:17:34.178 fused_ordering(562) 00:17:34.178 fused_ordering(563) 00:17:34.178 fused_ordering(564) 00:17:34.178 fused_ordering(565) 00:17:34.178 fused_ordering(566) 00:17:34.178 fused_ordering(567) 00:17:34.178 fused_ordering(568) 00:17:34.178 fused_ordering(569) 00:17:34.178 fused_ordering(570) 00:17:34.178 fused_ordering(571) 00:17:34.178 fused_ordering(572) 00:17:34.178 fused_ordering(573) 00:17:34.178 fused_ordering(574) 00:17:34.178 fused_ordering(575) 00:17:34.178 fused_ordering(576) 00:17:34.178 fused_ordering(577) 00:17:34.178 fused_ordering(578) 00:17:34.178 fused_ordering(579) 00:17:34.178 fused_ordering(580) 00:17:34.178 fused_ordering(581) 00:17:34.178 fused_ordering(582) 00:17:34.178 fused_ordering(583) 00:17:34.178 fused_ordering(584) 00:17:34.178 fused_ordering(585) 00:17:34.178 fused_ordering(586) 00:17:34.178 fused_ordering(587) 00:17:34.178 fused_ordering(588) 00:17:34.178 fused_ordering(589) 00:17:34.178 fused_ordering(590) 00:17:34.178 fused_ordering(591) 00:17:34.178 fused_ordering(592) 00:17:34.178 fused_ordering(593) 00:17:34.179 fused_ordering(594) 00:17:34.179 fused_ordering(595) 00:17:34.179 fused_ordering(596) 00:17:34.179 fused_ordering(597) 00:17:34.179 fused_ordering(598) 00:17:34.179 fused_ordering(599) 00:17:34.179 fused_ordering(600) 00:17:34.179 fused_ordering(601) 00:17:34.179 fused_ordering(602) 00:17:34.179 fused_ordering(603) 00:17:34.179 fused_ordering(604) 00:17:34.179 fused_ordering(605) 00:17:34.179 fused_ordering(606) 00:17:34.179 fused_ordering(607) 00:17:34.179 fused_ordering(608) 00:17:34.179 fused_ordering(609) 00:17:34.179 fused_ordering(610) 00:17:34.179 fused_ordering(611) 00:17:34.179 fused_ordering(612) 00:17:34.179 fused_ordering(613) 00:17:34.179 fused_ordering(614) 00:17:34.179 fused_ordering(615) 00:17:34.744 fused_ordering(616) 00:17:34.745 fused_ordering(617) 00:17:34.745 fused_ordering(618) 00:17:34.745 fused_ordering(619) 00:17:34.745 fused_ordering(620) 00:17:34.745 fused_ordering(621) 00:17:34.745 fused_ordering(622) 00:17:34.745 fused_ordering(623) 00:17:34.745 fused_ordering(624) 00:17:34.745 fused_ordering(625) 00:17:34.745 fused_ordering(626) 00:17:34.745 fused_ordering(627) 00:17:34.745 fused_ordering(628) 00:17:34.745 fused_ordering(629) 00:17:34.745 fused_ordering(630) 00:17:34.745 fused_ordering(631) 00:17:34.745 fused_ordering(632) 00:17:34.745 fused_ordering(633) 00:17:34.745 fused_ordering(634) 00:17:34.745 fused_ordering(635) 00:17:34.745 
fused_ordering(636) 00:17:34.745 fused_ordering(637) 00:17:34.745 fused_ordering(638) 00:17:34.745 fused_ordering(639) 00:17:34.745 fused_ordering(640) 00:17:34.745 fused_ordering(641) 00:17:34.745 fused_ordering(642) 00:17:34.745 fused_ordering(643) 00:17:34.745 fused_ordering(644) 00:17:34.745 fused_ordering(645) 00:17:34.745 fused_ordering(646) 00:17:34.745 fused_ordering(647) 00:17:34.745 fused_ordering(648) 00:17:34.745 fused_ordering(649) 00:17:34.745 fused_ordering(650) 00:17:34.745 fused_ordering(651) 00:17:34.745 fused_ordering(652) 00:17:34.745 fused_ordering(653) 00:17:34.745 fused_ordering(654) 00:17:34.745 fused_ordering(655) 00:17:34.745 fused_ordering(656) 00:17:34.745 fused_ordering(657) 00:17:34.745 fused_ordering(658) 00:17:34.745 fused_ordering(659) 00:17:34.745 fused_ordering(660) 00:17:34.745 fused_ordering(661) 00:17:34.745 fused_ordering(662) 00:17:34.745 fused_ordering(663) 00:17:34.745 fused_ordering(664) 00:17:34.745 fused_ordering(665) 00:17:34.745 fused_ordering(666) 00:17:34.745 fused_ordering(667) 00:17:34.745 fused_ordering(668) 00:17:34.745 fused_ordering(669) 00:17:34.745 fused_ordering(670) 00:17:34.745 fused_ordering(671) 00:17:34.745 fused_ordering(672) 00:17:34.745 fused_ordering(673) 00:17:34.745 fused_ordering(674) 00:17:34.745 fused_ordering(675) 00:17:34.745 fused_ordering(676) 00:17:34.745 fused_ordering(677) 00:17:34.745 fused_ordering(678) 00:17:34.745 fused_ordering(679) 00:17:34.745 fused_ordering(680) 00:17:34.745 fused_ordering(681) 00:17:34.745 fused_ordering(682) 00:17:34.745 fused_ordering(683) 00:17:34.745 fused_ordering(684) 00:17:34.745 fused_ordering(685) 00:17:34.745 fused_ordering(686) 00:17:34.745 fused_ordering(687) 00:17:34.745 fused_ordering(688) 00:17:34.745 fused_ordering(689) 00:17:34.745 fused_ordering(690) 00:17:34.745 fused_ordering(691) 00:17:34.745 fused_ordering(692) 00:17:34.745 fused_ordering(693) 00:17:34.745 fused_ordering(694) 00:17:34.745 fused_ordering(695) 00:17:34.745 fused_ordering(696) 00:17:34.745 fused_ordering(697) 00:17:34.745 fused_ordering(698) 00:17:34.745 fused_ordering(699) 00:17:34.745 fused_ordering(700) 00:17:34.745 fused_ordering(701) 00:17:34.745 fused_ordering(702) 00:17:34.745 fused_ordering(703) 00:17:34.745 fused_ordering(704) 00:17:34.745 fused_ordering(705) 00:17:34.745 fused_ordering(706) 00:17:34.745 fused_ordering(707) 00:17:34.745 fused_ordering(708) 00:17:34.745 fused_ordering(709) 00:17:34.745 fused_ordering(710) 00:17:34.745 fused_ordering(711) 00:17:34.745 fused_ordering(712) 00:17:34.745 fused_ordering(713) 00:17:34.745 fused_ordering(714) 00:17:34.745 fused_ordering(715) 00:17:34.745 fused_ordering(716) 00:17:34.745 fused_ordering(717) 00:17:34.745 fused_ordering(718) 00:17:34.745 fused_ordering(719) 00:17:34.745 fused_ordering(720) 00:17:34.745 fused_ordering(721) 00:17:34.745 fused_ordering(722) 00:17:34.745 fused_ordering(723) 00:17:34.745 fused_ordering(724) 00:17:34.745 fused_ordering(725) 00:17:34.745 fused_ordering(726) 00:17:34.745 fused_ordering(727) 00:17:34.745 fused_ordering(728) 00:17:34.745 fused_ordering(729) 00:17:34.745 fused_ordering(730) 00:17:34.745 fused_ordering(731) 00:17:34.745 fused_ordering(732) 00:17:34.745 fused_ordering(733) 00:17:34.745 fused_ordering(734) 00:17:34.745 fused_ordering(735) 00:17:34.745 fused_ordering(736) 00:17:34.745 fused_ordering(737) 00:17:34.745 fused_ordering(738) 00:17:34.745 fused_ordering(739) 00:17:34.745 fused_ordering(740) 00:17:34.745 fused_ordering(741) 00:17:34.745 fused_ordering(742) 00:17:34.745 fused_ordering(743) 
00:17:34.745 fused_ordering(744) 00:17:34.745 fused_ordering(745) 00:17:34.745 fused_ordering(746) 00:17:34.745 fused_ordering(747) 00:17:34.745 fused_ordering(748) 00:17:34.745 fused_ordering(749) 00:17:34.745 fused_ordering(750) 00:17:34.745 fused_ordering(751) 00:17:34.745 fused_ordering(752) 00:17:34.745 fused_ordering(753) 00:17:34.745 fused_ordering(754) 00:17:34.745 fused_ordering(755) 00:17:34.745 fused_ordering(756) 00:17:34.745 fused_ordering(757) 00:17:34.745 fused_ordering(758) 00:17:34.745 fused_ordering(759) 00:17:34.745 fused_ordering(760) 00:17:34.745 fused_ordering(761) 00:17:34.745 fused_ordering(762) 00:17:34.745 fused_ordering(763) 00:17:34.745 fused_ordering(764) 00:17:34.745 fused_ordering(765) 00:17:34.745 fused_ordering(766) 00:17:34.745 fused_ordering(767) 00:17:34.745 fused_ordering(768) 00:17:34.745 fused_ordering(769) 00:17:34.745 fused_ordering(770) 00:17:34.745 fused_ordering(771) 00:17:34.745 fused_ordering(772) 00:17:34.745 fused_ordering(773) 00:17:34.745 fused_ordering(774) 00:17:34.745 fused_ordering(775) 00:17:34.745 fused_ordering(776) 00:17:34.745 fused_ordering(777) 00:17:34.745 fused_ordering(778) 00:17:34.745 fused_ordering(779) 00:17:34.745 fused_ordering(780) 00:17:34.745 fused_ordering(781) 00:17:34.745 fused_ordering(782) 00:17:34.745 fused_ordering(783) 00:17:34.745 fused_ordering(784) 00:17:34.745 fused_ordering(785) 00:17:34.745 fused_ordering(786) 00:17:34.745 fused_ordering(787) 00:17:34.745 fused_ordering(788) 00:17:34.745 fused_ordering(789) 00:17:34.745 fused_ordering(790) 00:17:34.745 fused_ordering(791) 00:17:34.745 fused_ordering(792) 00:17:34.745 fused_ordering(793) 00:17:34.745 fused_ordering(794) 00:17:34.745 fused_ordering(795) 00:17:34.745 fused_ordering(796) 00:17:34.745 fused_ordering(797) 00:17:34.745 fused_ordering(798) 00:17:34.745 fused_ordering(799) 00:17:34.745 fused_ordering(800) 00:17:34.745 fused_ordering(801) 00:17:34.745 fused_ordering(802) 00:17:34.745 fused_ordering(803) 00:17:34.745 fused_ordering(804) 00:17:34.745 fused_ordering(805) 00:17:34.745 fused_ordering(806) 00:17:34.745 fused_ordering(807) 00:17:34.745 fused_ordering(808) 00:17:34.745 fused_ordering(809) 00:17:34.745 fused_ordering(810) 00:17:34.745 fused_ordering(811) 00:17:34.745 fused_ordering(812) 00:17:34.745 fused_ordering(813) 00:17:34.745 fused_ordering(814) 00:17:34.745 fused_ordering(815) 00:17:34.745 fused_ordering(816) 00:17:34.745 fused_ordering(817) 00:17:34.745 fused_ordering(818) 00:17:34.745 fused_ordering(819) 00:17:34.745 fused_ordering(820) 00:17:35.004 fused_o[2024-12-16 05:46:08.776530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1440270 is same with the state(6) to be set 00:17:35.004 rdering(821) 00:17:35.004 fused_ordering(822) 00:17:35.004 fused_ordering(823) 00:17:35.004 fused_ordering(824) 00:17:35.004 fused_ordering(825) 00:17:35.004 fused_ordering(826) 00:17:35.004 fused_ordering(827) 00:17:35.004 fused_ordering(828) 00:17:35.004 fused_ordering(829) 00:17:35.004 fused_ordering(830) 00:17:35.004 fused_ordering(831) 00:17:35.004 fused_ordering(832) 00:17:35.004 fused_ordering(833) 00:17:35.004 fused_ordering(834) 00:17:35.004 fused_ordering(835) 00:17:35.004 fused_ordering(836) 00:17:35.004 fused_ordering(837) 00:17:35.004 fused_ordering(838) 00:17:35.004 fused_ordering(839) 00:17:35.004 fused_ordering(840) 00:17:35.004 fused_ordering(841) 00:17:35.004 fused_ordering(842) 00:17:35.004 fused_ordering(843) 00:17:35.004 fused_ordering(844) 00:17:35.004 fused_ordering(845) 00:17:35.004 
fused_ordering(846) 00:17:35.004 fused_ordering(847) 00:17:35.004 fused_ordering(848) 00:17:35.004 fused_ordering(849) 00:17:35.004 fused_ordering(850) 00:17:35.004 fused_ordering(851) 00:17:35.004 fused_ordering(852) 00:17:35.004 fused_ordering(853) 00:17:35.004 fused_ordering(854) 00:17:35.004 fused_ordering(855) 00:17:35.004 fused_ordering(856) 00:17:35.004 fused_ordering(857) 00:17:35.004 fused_ordering(858) 00:17:35.004 fused_ordering(859) 00:17:35.004 fused_ordering(860) 00:17:35.004 fused_ordering(861) 00:17:35.004 fused_ordering(862) 00:17:35.004 fused_ordering(863) 00:17:35.004 fused_ordering(864) 00:17:35.004 fused_ordering(865) 00:17:35.004 fused_ordering(866) 00:17:35.004 fused_ordering(867) 00:17:35.004 fused_ordering(868) 00:17:35.004 fused_ordering(869) 00:17:35.004 fused_ordering(870) 00:17:35.004 fused_ordering(871) 00:17:35.004 fused_ordering(872) 00:17:35.004 fused_ordering(873) 00:17:35.004 fused_ordering(874) 00:17:35.004 fused_ordering(875) 00:17:35.004 fused_ordering(876) 00:17:35.004 fused_ordering(877) 00:17:35.004 fused_ordering(878) 00:17:35.004 fused_ordering(879) 00:17:35.004 fused_ordering(880) 00:17:35.004 fused_ordering(881) 00:17:35.004 fused_ordering(882) 00:17:35.004 fused_ordering(883) 00:17:35.004 fused_ordering(884) 00:17:35.004 fused_ordering(885) 00:17:35.004 fused_ordering(886) 00:17:35.004 fused_ordering(887) 00:17:35.004 fused_ordering(888) 00:17:35.004 fused_ordering(889) 00:17:35.004 fused_ordering(890) 00:17:35.004 fused_ordering(891) 00:17:35.004 fused_ordering(892) 00:17:35.004 fused_ordering(893) 00:17:35.004 fused_ordering(894) 00:17:35.004 fused_ordering(895) 00:17:35.004 fused_ordering(896) 00:17:35.004 fused_ordering(897) 00:17:35.004 fused_ordering(898) 00:17:35.004 fused_ordering(899) 00:17:35.004 fused_ordering(900) 00:17:35.004 fused_ordering(901) 00:17:35.004 fused_ordering(902) 00:17:35.004 fused_ordering(903) 00:17:35.004 fused_ordering(904) 00:17:35.004 fused_ordering(905) 00:17:35.004 fused_ordering(906) 00:17:35.004 fused_ordering(907) 00:17:35.004 fused_ordering(908) 00:17:35.004 fused_ordering(909) 00:17:35.004 fused_ordering(910) 00:17:35.004 fused_ordering(911) 00:17:35.004 fused_ordering(912) 00:17:35.004 fused_ordering(913) 00:17:35.004 fused_ordering(914) 00:17:35.004 fused_ordering(915) 00:17:35.004 fused_ordering(916) 00:17:35.004 fused_ordering(917) 00:17:35.004 fused_ordering(918) 00:17:35.004 fused_ordering(919) 00:17:35.004 fused_ordering(920) 00:17:35.004 fused_ordering(921) 00:17:35.004 fused_ordering(922) 00:17:35.004 fused_ordering(923) 00:17:35.004 fused_ordering(924) 00:17:35.004 fused_ordering(925) 00:17:35.004 fused_ordering(926) 00:17:35.004 fused_ordering(927) 00:17:35.004 fused_ordering(928) 00:17:35.004 fused_ordering(929) 00:17:35.004 fused_ordering(930) 00:17:35.004 fused_ordering(931) 00:17:35.004 fused_ordering(932) 00:17:35.004 fused_ordering(933) 00:17:35.004 fused_ordering(934) 00:17:35.004 fused_ordering(935) 00:17:35.004 fused_ordering(936) 00:17:35.004 fused_ordering(937) 00:17:35.004 fused_ordering(938) 00:17:35.004 fused_ordering(939) 00:17:35.004 fused_ordering(940) 00:17:35.004 fused_ordering(941) 00:17:35.004 fused_ordering(942) 00:17:35.004 fused_ordering(943) 00:17:35.004 fused_ordering(944) 00:17:35.004 fused_ordering(945) 00:17:35.004 fused_ordering(946) 00:17:35.004 fused_ordering(947) 00:17:35.004 fused_ordering(948) 00:17:35.004 fused_ordering(949) 00:17:35.004 fused_ordering(950) 00:17:35.004 fused_ordering(951) 00:17:35.004 fused_ordering(952) 00:17:35.004 fused_ordering(953) 
00:17:35.004 fused_ordering(954) 00:17:35.004 fused_ordering(955) 00:17:35.004 fused_ordering(956) 00:17:35.004 fused_ordering(957) 00:17:35.004 fused_ordering(958) 00:17:35.004 fused_ordering(959) 00:17:35.004 fused_ordering(960) 00:17:35.004 fused_ordering(961) 00:17:35.004 fused_ordering(962) 00:17:35.004 fused_ordering(963) 00:17:35.004 fused_ordering(964) 00:17:35.004 fused_ordering(965) 00:17:35.004 fused_ordering(966) 00:17:35.004 fused_ordering(967) 00:17:35.004 fused_ordering(968) 00:17:35.004 fused_ordering(969) 00:17:35.004 fused_ordering(970) 00:17:35.004 fused_ordering(971) 00:17:35.004 fused_ordering(972) 00:17:35.004 fused_ordering(973) 00:17:35.004 fused_ordering(974) 00:17:35.004 fused_ordering(975) 00:17:35.004 fused_ordering(976) 00:17:35.005 fused_ordering(977) 00:17:35.005 fused_ordering(978) 00:17:35.005 fused_ordering(979) 00:17:35.005 fused_ordering(980) 00:17:35.005 fused_ordering(981) 00:17:35.005 fused_ordering(982) 00:17:35.005 fused_ordering(983) 00:17:35.005 fused_ordering(984) 00:17:35.005 fused_ordering(985) 00:17:35.005 fused_ordering(986) 00:17:35.005 fused_ordering(987) 00:17:35.005 fused_ordering(988) 00:17:35.005 fused_ordering(989) 00:17:35.005 fused_ordering(990) 00:17:35.005 fused_ordering(991) 00:17:35.005 fused_ordering(992) 00:17:35.005 fused_ordering(993) 00:17:35.005 fused_ordering(994) 00:17:35.005 fused_ordering(995) 00:17:35.005 fused_ordering(996) 00:17:35.005 fused_ordering(997) 00:17:35.005 fused_ordering(998) 00:17:35.005 fused_ordering(999) 00:17:35.005 fused_ordering(1000) 00:17:35.005 fused_ordering(1001) 00:17:35.005 fused_ordering(1002) 00:17:35.005 fused_ordering(1003) 00:17:35.005 fused_ordering(1004) 00:17:35.005 fused_ordering(1005) 00:17:35.005 fused_ordering(1006) 00:17:35.005 fused_ordering(1007) 00:17:35.005 fused_ordering(1008) 00:17:35.005 fused_ordering(1009) 00:17:35.005 fused_ordering(1010) 00:17:35.005 fused_ordering(1011) 00:17:35.005 fused_ordering(1012) 00:17:35.005 fused_ordering(1013) 00:17:35.005 fused_ordering(1014) 00:17:35.005 fused_ordering(1015) 00:17:35.005 fused_ordering(1016) 00:17:35.005 fused_ordering(1017) 00:17:35.005 fused_ordering(1018) 00:17:35.005 fused_ordering(1019) 00:17:35.005 fused_ordering(1020) 00:17:35.005 fused_ordering(1021) 00:17:35.005 fused_ordering(1022) 00:17:35.005 fused_ordering(1023) 00:17:35.005 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:35.005 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:35.005 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:35.005 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:35.005 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:35.005 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:35.005 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:35.005 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:35.005 rmmod nvme_tcp 00:17:35.005 rmmod nvme_fabrics 00:17:35.005 rmmod nvme_keyring 00:17:35.005 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:35.005 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- 
# set -e 00:17:35.005 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:35.005 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@513 -- # '[' -n 3328968 ']' 00:17:35.005 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # killprocess 3328968 00:17:35.005 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 3328968 ']' 00:17:35.005 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 3328968 00:17:35.005 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:17:35.005 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:35.263 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3328968 00:17:35.263 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:35.263 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:35.263 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3328968' 00:17:35.263 killing process with pid 3328968 00:17:35.263 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 3328968 00:17:35.263 05:46:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 3328968 00:17:35.263 05:46:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:35.263 05:46:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:35.263 05:46:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:35.263 05:46:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:35.263 05:46:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-save 00:17:35.263 05:46:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:35.263 05:46:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-restore 00:17:35.263 05:46:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:35.263 05:46:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:35.263 05:46:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.263 05:46:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.263 05:46:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:37.798 00:17:37.798 real 0m10.402s 00:17:37.798 user 0m4.789s 00:17:37.798 sys 0m5.660s 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 
-- # set +x 00:17:37.798 ************************************ 00:17:37.798 END TEST nvmf_fused_ordering 00:17:37.798 ************************************ 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:37.798 ************************************ 00:17:37.798 START TEST nvmf_ns_masking 00:17:37.798 ************************************ 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:37.798 * Looking for test storage... 00:17:37.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:37.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.798 --rc genhtml_branch_coverage=1 00:17:37.798 --rc genhtml_function_coverage=1 00:17:37.798 --rc genhtml_legend=1 00:17:37.798 --rc geninfo_all_blocks=1 00:17:37.798 --rc geninfo_unexecuted_blocks=1 00:17:37.798 00:17:37.798 ' 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:37.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.798 --rc genhtml_branch_coverage=1 00:17:37.798 --rc genhtml_function_coverage=1 00:17:37.798 --rc genhtml_legend=1 00:17:37.798 --rc geninfo_all_blocks=1 00:17:37.798 --rc geninfo_unexecuted_blocks=1 00:17:37.798 00:17:37.798 ' 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:37.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.798 --rc genhtml_branch_coverage=1 00:17:37.798 --rc genhtml_function_coverage=1 00:17:37.798 --rc genhtml_legend=1 00:17:37.798 --rc geninfo_all_blocks=1 00:17:37.798 --rc geninfo_unexecuted_blocks=1 00:17:37.798 00:17:37.798 ' 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:37.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.798 --rc genhtml_branch_coverage=1 00:17:37.798 --rc genhtml_function_coverage=1 00:17:37.798 --rc genhtml_legend=1 00:17:37.798 --rc geninfo_all_blocks=1 00:17:37.798 --rc geninfo_unexecuted_blocks=1 00:17:37.798 00:17:37.798 ' 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:37.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=18b82b45-6981-43cf-982a-bad3f0947944 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:37.798 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=1fe3e8fe-8c53-4ae6-95f5-32f36a6bbf1d 00:17:37.799 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:37.799 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:37.799 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:37.799 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:37.799 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=85a606d3-673a-450e-a09b-5a17e070c679 00:17:37.799 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:37.799 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:37.799 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.799 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:37.799 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:37.799 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:37.799 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.799 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.799 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.799 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:37.799 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:37.799 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:37.799 05:46:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:43.068 05:46:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:43.068 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:43.068 05:46:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:43.068 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:43.068 Found net devices under 0000:af:00.0: cvl_0_0 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:43.068 
05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:43.068 Found net devices under 0000:af:00.1: cvl_0_1 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # is_hw=yes 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:43.068 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:43.327 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:43.327 05:46:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:43.327 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:43.327 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:43.327 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:43.327 05:46:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:43.327 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:43.327 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:43.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:17:43.327 00:17:43.327 --- 10.0.0.2 ping statistics --- 00:17:43.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.327 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:17:43.327 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:43.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:43.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:17:43.586 00:17:43.586 --- 10.0.0.1 ping statistics --- 00:17:43.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.586 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # return 0 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # nvmfpid=3332714 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # waitforlisten 3332714 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3332714 ']' 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
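Condensed, the target-network bring-up traced above amounts to the following minimal sketch; the interface names (cvl_0_0, cvl_0_1), the network namespace name, and the 10.0.0.0/24 addresses are specific to this run and will differ on other hosts.

# Move the target-side port into its own network namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port and verify reachability in both directions, then load the initiator driver.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp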
common/autotest_common.sh@836 -- # local max_retries=100 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:43.586 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:43.586 [2024-12-16 05:46:17.288760] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:43.586 [2024-12-16 05:46:17.288815] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.586 [2024-12-16 05:46:17.349924] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.586 [2024-12-16 05:46:17.389389] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.586 [2024-12-16 05:46:17.389432] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.586 [2024-12-16 05:46:17.389439] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.586 [2024-12-16 05:46:17.389445] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.586 [2024-12-16 05:46:17.389450] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.586 [2024-12-16 05:46:17.389468] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.873 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:43.873 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:43.873 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:43.873 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:43.873 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:43.873 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.873 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:43.873 [2024-12-16 05:46:17.683724] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.873 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:43.873 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:43.873 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:44.170 Malloc1 00:17:44.170 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 
64 512 -b Malloc2 00:17:44.441 Malloc2 00:17:44.441 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:44.717 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:44.717 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:44.986 [2024-12-16 05:46:18.653828] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:44.986 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:44.986 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 85a606d3-673a-450e-a09b-5a17e070c679 -a 10.0.0.2 -s 4420 -i 4 00:17:44.986 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:44.986 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:44.986 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:44.986 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:44.986 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:47.515 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:47.515 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:47.515 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:47.515 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:47.515 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:47.515 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:47.515 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:47.515 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:47.515 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:47.515 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:47.515 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:47.515 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:47.515 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:47.515 [ 0]:0x1 00:17:47.515 05:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:47.515 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:47.515 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8b17581c10b047e09f74aa5654d03811 00:17:47.515 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8b17581c10b047e09f74aa5654d03811 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:47.515 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:47.515 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:47.515 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:47.515 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:47.515 [ 0]:0x1 00:17:47.515 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:47.515 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:47.515 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8b17581c10b047e09f74aa5654d03811 00:17:47.516 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8b17581c10b047e09f74aa5654d03811 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:47.516 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:47.516 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:47.516 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:47.516 [ 1]:0x2 00:17:47.516 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:47.516 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:47.516 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=910902d0031046f3bc3d54c78e8614e0 00:17:47.516 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 910902d0031046f3bc3d54c78e8614e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:47.516 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:47.516 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:47.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.516 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:47.774 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:48.032 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
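At this point in the trace the target has been rebuilt so that namespace 1 is masked by default; a minimal sketch of the equivalent setup follows (the rpc.py path, NQNs, serial number, and host UUID are the values used in this run).

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# TCP transport and two 64 MiB / 512 B-block malloc bdevs to expose as namespaces.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc bdev_malloc_create 64 512 -b Malloc2
# Subsystem with a TCP listener; ns 2 is auto-visible, ns 1 is added masked.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
# Reconnect from the initiator with host1's NQN and host identifier.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
  -I 85a606d3-673a-450e-a09b-5a17e070c679 -a 10.0.0.2 -s 4420 -i 4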
target/ns_masking.sh@83 -- # connect 1 00:17:48.032 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 85a606d3-673a-450e-a09b-5a17e070c679 -a 10.0.0.2 -s 4420 -i 4 00:17:48.032 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:48.032 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:48.032 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:48.032 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:17:48.032 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:17:48.032 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 
00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:50.559 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:50.559 [ 0]:0x2 00:17:50.559 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:50.559 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.559 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=910902d0031046f3bc3d54c78e8614e0 00:17:50.559 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 910902d0031046f3bc3d54c78e8614e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.559 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:50.559 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:50.559 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:50.559 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:50.559 [ 0]:0x1 00:17:50.559 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:50.559 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.559 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8b17581c10b047e09f74aa5654d03811 00:17:50.559 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8b17581c10b047e09f74aa5654d03811 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.559 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:50.559 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:50.559 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:50.559 [ 1]:0x2 00:17:50.559 
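The repeated visibility probes in the trace follow ns_masking.sh's ns_is_visible helper; a rough sketch of the same check is below (the controller name nvme0 is the one enumerated in this run, and the real helper may differ in detail).

ns_is_visible() {
    local nsid=$1
    # A visible namespace shows up in the controller's namespace list ...
    nvme list-ns /dev/nvme0 | grep "$nsid"
    # ... and reports a non-zero NGUID; masked namespaces come back all zeros.
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}
# With ns 1 masked, granting host1 access makes it reappear without a reconnect.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
ns_is_visible 0x1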
05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:50.559 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.559 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=910902d0031046f3bc3d54c78e8614e0 00:17:50.559 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 910902d0031046f3bc3d54c78e8614e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.560 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:50.817 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:50.817 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:50.817 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:50.817 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:50.817 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.817 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:50.817 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.817 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:50.817 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:50.817 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:50.818 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:50.818 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.818 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:50.818 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.818 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:50.818 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:50.818 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:50.818 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:50.818 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:50.818 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:50.818 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:50.818 [ 0]:0x2 00:17:50.818 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:17:50.818 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.818 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=910902d0031046f3bc3d54c78e8614e0 00:17:50.818 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 910902d0031046f3bc3d54c78e8614e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.818 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:50.818 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:51.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.076 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:51.076 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:51.076 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 85a606d3-673a-450e-a09b-5a17e070c679 -a 10.0.0.2 -s 4420 -i 4 00:17:51.334 05:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:51.334 05:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:51.334 05:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:51.334 05:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:51.334 05:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:51.334 05:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:53.230 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:53.230 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:53.230 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:53.230 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:53.230 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:53.230 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:53.230 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:53.230 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:53.230 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:53.231 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:53.231 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:53.231 
05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:53.231 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:53.231 [ 0]:0x1 00:17:53.231 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:53.488 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:53.488 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8b17581c10b047e09f74aa5654d03811 00:17:53.488 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8b17581c10b047e09f74aa5654d03811 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:53.488 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:53.488 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:53.488 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:53.488 [ 1]:0x2 00:17:53.488 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:53.488 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:53.488 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=910902d0031046f3bc3d54c78e8614e0 00:17:53.488 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 910902d0031046f3bc3d54c78e8614e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:53.488 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:53.746 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:53.747 [ 0]:0x2 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=910902d0031046f3bc3d54c78e8614e0 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 910902d0031046f3bc3d54c78e8614e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:53.747 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:54.005 [2024-12-16 05:46:27.627350] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:54.005 request: 00:17:54.005 { 00:17:54.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.005 "nsid": 2, 00:17:54.005 "host": "nqn.2016-06.io.spdk:host1", 00:17:54.005 "method": "nvmf_ns_remove_host", 00:17:54.005 "req_id": 1 00:17:54.005 } 00:17:54.005 Got JSON-RPC error response 00:17:54.005 response: 00:17:54.005 { 00:17:54.005 "code": -32602, 00:17:54.005 "message": "Invalid parameters" 00:17:54.005 } 00:17:54.005 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:54.005 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:54.005 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:54.005 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:54.005 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:54.005 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:54.005 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:54.005 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:54.005 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:54.005 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:54.005 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:54.006 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:54.006 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.006 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:54.006 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:54.006 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.006 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:54.006 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.006 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:54.006 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:54.006 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:54.006 05:46:27 
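The JSON-RPC failure above is expected by the test (the call is wrapped in NOT): namespace 2 was added without --no-auto-visible, and its per-host visibility presumably cannot be changed for that reason, while the masked namespace 1 accepts host add/remove. A short sketch of the contrast, using the same NQNs:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Rejected with -32602 "Invalid parameters": ns 2 is auto-visible.
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 || true
# Accepted: ns 1 was created with --no-auto-visible, so hosts can be toggled.
$rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1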
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:54.006 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:54.006 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.006 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:54.006 [ 0]:0x2 00:17:54.006 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:54.006 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.006 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=910902d0031046f3bc3d54c78e8614e0 00:17:54.006 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 910902d0031046f3bc3d54c78e8614e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.006 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:54.006 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:54.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:54.264 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3334639 00:17:54.264 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:54.264 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.264 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3334639 /var/tmp/host.sock 00:17:54.264 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3334639 ']' 00:17:54.264 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:54.264 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:54.264 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:54.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:54.264 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:54.264 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:54.264 [2024-12-16 05:46:27.983410] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:54.264 [2024-12-16 05:46:27.983457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3334639 ] 00:17:54.265 [2024-12-16 05:46:28.038397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.265 [2024-12-16 05:46:28.076729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.523 05:46:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:54.523 05:46:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:54.523 05:46:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:54.781 05:46:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:55.039 05:46:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 18b82b45-6981-43cf-982a-bad3f0947944 00:17:55.040 05:46:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:17:55.040 05:46:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 18B82B45698143CF982ABAD3F0947944 -i 00:17:55.040 05:46:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 1fe3e8fe-8c53-4ae6-95f5-32f36a6bbf1d 00:17:55.040 05:46:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:17:55.040 05:46:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 1FE3E8FE8C534AE695F532F36A6BBF1D -i 00:17:55.299 05:46:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:55.559 05:46:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:55.559 05:46:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:55.559 05:46:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:56.127 nvme0n1 00:17:56.127 05:46:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:56.127 05:46:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:56.385 nvme1n2 00:17:56.385 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:56.385 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:56.385 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:56.385 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:56.385 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:56.385 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:56.385 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:56.385 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:56.385 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:56.644 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 18b82b45-6981-43cf-982a-bad3f0947944 == \1\8\b\8\2\b\4\5\-\6\9\8\1\-\4\3\c\f\-\9\8\2\a\-\b\a\d\3\f\0\9\4\7\9\4\4 ]] 00:17:56.644 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:56.644 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:56.644 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:56.904 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 1fe3e8fe-8c53-4ae6-95f5-32f36a6bbf1d == \1\f\e\3\e\8\f\e\-\8\c\5\3\-\4\a\e\6\-\9\5\f\5\-\3\2\f\3\6\a\6\b\b\f\1\d ]] 00:17:56.904 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3334639 00:17:56.904 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3334639 ']' 00:17:56.904 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3334639 00:17:56.904 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:56.904 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:56.904 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3334639 00:17:56.904 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:56.904 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:56.904 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3334639' 00:17:56.904 
killing process with pid 3334639 00:17:56.904 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3334639 00:17:56.904 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3334639 00:17:57.163 05:46:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:57.422 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:17:57.422 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:17:57.422 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:57.422 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:57.422 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:57.422 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:57.422 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:57.422 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:57.422 rmmod nvme_tcp 00:17:57.422 rmmod nvme_fabrics 00:17:57.422 rmmod nvme_keyring 00:17:57.422 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:57.422 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:57.422 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:57.422 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@513 -- # '[' -n 3332714 ']' 00:17:57.422 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # killprocess 3332714 00:17:57.422 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3332714 ']' 00:17:57.422 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3332714 00:17:57.422 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:57.422 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:57.422 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3332714 00:17:57.681 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:57.682 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:57.682 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3332714' 00:17:57.682 killing process with pid 3332714 00:17:57.682 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3332714 00:17:57.682 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3332714 00:17:57.682 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:57.682 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:57.682 05:46:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:57.682 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:57.682 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:57.682 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-save 00:17:57.682 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-restore 00:17:57.682 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:57.682 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:57.682 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.682 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.682 05:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:00.218 00:18:00.218 real 0m22.346s 00:18:00.218 user 0m23.463s 00:18:00.218 sys 0m6.454s 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:00.218 ************************************ 00:18:00.218 END TEST nvmf_ns_masking 00:18:00.218 ************************************ 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:00.218 ************************************ 00:18:00.218 START TEST nvmf_nvme_cli 00:18:00.218 ************************************ 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:00.218 * Looking for test storage... 
00:18:00.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:00.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.218 --rc genhtml_branch_coverage=1 00:18:00.218 --rc genhtml_function_coverage=1 00:18:00.218 --rc genhtml_legend=1 00:18:00.218 --rc geninfo_all_blocks=1 00:18:00.218 --rc geninfo_unexecuted_blocks=1 00:18:00.218 00:18:00.218 ' 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:00.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.218 --rc genhtml_branch_coverage=1 00:18:00.218 --rc genhtml_function_coverage=1 00:18:00.218 --rc genhtml_legend=1 00:18:00.218 --rc geninfo_all_blocks=1 00:18:00.218 --rc geninfo_unexecuted_blocks=1 00:18:00.218 00:18:00.218 ' 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:00.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.218 --rc genhtml_branch_coverage=1 00:18:00.218 --rc genhtml_function_coverage=1 00:18:00.218 --rc genhtml_legend=1 00:18:00.218 --rc geninfo_all_blocks=1 00:18:00.218 --rc geninfo_unexecuted_blocks=1 00:18:00.218 00:18:00.218 ' 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:00.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.218 --rc genhtml_branch_coverage=1 00:18:00.218 --rc genhtml_function_coverage=1 00:18:00.218 --rc genhtml_legend=1 00:18:00.218 --rc geninfo_all_blocks=1 00:18:00.218 --rc geninfo_unexecuted_blocks=1 00:18:00.218 00:18:00.218 ' 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.218 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:00.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:00.219 05:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:00.219 05:46:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:05.486 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:05.486 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:05.487 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:05.487 05:46:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:05.487 Found net devices under 0000:af:00.0: cvl_0_0 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:05.487 Found net devices under 0000:af:00.1: cvl_0_1 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # is_hw=yes 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:05.487 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:05.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:05.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:18:05.745 00:18:05.745 --- 10.0.0.2 ping statistics --- 00:18:05.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.745 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:05.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:05.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:18:05.745 00:18:05.745 --- 10.0.0.1 ping statistics --- 00:18:05.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.745 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # return 0 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # nvmfpid=3338657 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # waitforlisten 3338657 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 3338657 ']' 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:05.745 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:05.745 [2024-12-16 05:46:39.463342] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:18:05.745 [2024-12-16 05:46:39.463394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.745 [2024-12-16 05:46:39.524435] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:05.745 [2024-12-16 05:46:39.568211] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.745 [2024-12-16 05:46:39.568250] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.745 [2024-12-16 05:46:39.568257] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.745 [2024-12-16 05:46:39.568263] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.745 [2024-12-16 05:46:39.568268] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:05.745 [2024-12-16 05:46:39.568316] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.745 [2024-12-16 05:46:39.568412] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.745 [2024-12-16 05:46:39.568501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:05.745 [2024-12-16 05:46:39.568502] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.003 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:06.003 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:18:06.003 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:06.003 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.004 [2024-12-16 05:46:39.709499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.004 Malloc0 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.004 Malloc1 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.004 [2024-12-16 05:46:39.786533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.004 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:18:06.261 00:18:06.261 Discovery Log Number of Records 2, Generation counter 2 00:18:06.261 =====Discovery Log Entry 0====== 00:18:06.261 trtype: tcp 00:18:06.261 adrfam: ipv4 00:18:06.261 subtype: current discovery subsystem 00:18:06.261 treq: not required 00:18:06.261 portid: 0 00:18:06.261 trsvcid: 4420 00:18:06.261 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:18:06.261 traddr: 10.0.0.2 00:18:06.261 eflags: explicit discovery connections, duplicate discovery information 00:18:06.261 sectype: none 00:18:06.261 =====Discovery Log Entry 1====== 00:18:06.261 trtype: tcp 00:18:06.261 adrfam: ipv4 00:18:06.261 subtype: nvme subsystem 00:18:06.261 treq: not required 00:18:06.261 portid: 0 00:18:06.261 trsvcid: 4420 00:18:06.261 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:06.261 traddr: 10.0.0.2 00:18:06.261 eflags: none 00:18:06.261 sectype: none 00:18:06.261 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:06.261 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:06.261 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:06.261 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:06.261 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:06.261 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:06.261 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:06.261 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:18:06.261 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:06.261 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:06.261 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:07.193 05:46:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:07.193 05:46:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:18:07.193 05:46:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:07.193 05:46:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:07.193 05:46:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:07.193 05:46:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:18:09.718 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:09.719 05:46:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:09.719 /dev/nvme0n2 ]] 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:09.719 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:09.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:09.977 05:46:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:09.977 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:18:09.977 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:09.977 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:09.977 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:09.977 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:09.977 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:18:09.977 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:09.977 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:09.977 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.977 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:09.977 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.977 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:09.977 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:09.977 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:09.977 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:09.978 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:09.978 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:09.978 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:09.978 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:09.978 rmmod nvme_tcp 00:18:09.978 rmmod nvme_fabrics 00:18:09.978 rmmod nvme_keyring 00:18:09.978 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:09.978 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:09.978 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:09.978 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@513 -- # '[' -n 3338657 ']' 00:18:09.978 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # killprocess 3338657 00:18:09.978 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 3338657 ']' 00:18:09.978 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 3338657 00:18:09.978 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:18:09.978 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:09.978 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
3338657 00:18:09.978 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:09.978 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:09.978 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3338657' 00:18:09.978 killing process with pid 3338657 00:18:09.978 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 3338657 00:18:09.978 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 3338657 00:18:10.236 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:10.236 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:10.236 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:10.236 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:10.236 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-save 00:18:10.236 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:10.236 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-restore 00:18:10.236 05:46:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:10.236 05:46:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:10.236 05:46:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.236 05:46:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.236 05:46:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:12.769 00:18:12.769 real 0m12.436s 00:18:12.769 user 0m19.142s 00:18:12.769 sys 0m4.831s 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:12.769 ************************************ 00:18:12.769 END TEST nvmf_nvme_cli 00:18:12.769 ************************************ 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:12.769 ************************************ 00:18:12.769 START TEST nvmf_vfio_user 00:18:12.769 ************************************ 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:18:12.769 * Looking for test storage... 00:18:12.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:12.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.769 --rc genhtml_branch_coverage=1 00:18:12.769 --rc genhtml_function_coverage=1 00:18:12.769 --rc genhtml_legend=1 00:18:12.769 --rc geninfo_all_blocks=1 00:18:12.769 --rc geninfo_unexecuted_blocks=1 00:18:12.769 00:18:12.769 ' 00:18:12.769 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:12.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.770 --rc genhtml_branch_coverage=1 00:18:12.770 --rc genhtml_function_coverage=1 00:18:12.770 --rc genhtml_legend=1 00:18:12.770 --rc geninfo_all_blocks=1 00:18:12.770 --rc geninfo_unexecuted_blocks=1 00:18:12.770 00:18:12.770 ' 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:12.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.770 --rc genhtml_branch_coverage=1 00:18:12.770 --rc genhtml_function_coverage=1 00:18:12.770 --rc genhtml_legend=1 00:18:12.770 --rc geninfo_all_blocks=1 00:18:12.770 --rc geninfo_unexecuted_blocks=1 00:18:12.770 00:18:12.770 ' 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:12.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.770 --rc genhtml_branch_coverage=1 00:18:12.770 --rc genhtml_function_coverage=1 00:18:12.770 --rc genhtml_legend=1 00:18:12.770 --rc geninfo_all_blocks=1 00:18:12.770 --rc geninfo_unexecuted_blocks=1 00:18:12.770 00:18:12.770 ' 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:12.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3339954 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3339954' 00:18:12.770 Process pid: 3339954 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3339954 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3339954 ']' 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:12.770 [2024-12-16 05:46:46.412427] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:12.770 [2024-12-16 05:46:46.412479] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.770 [2024-12-16 05:46:46.470165] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:12.770 [2024-12-16 05:46:46.510270] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.770 [2024-12-16 05:46:46.510312] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:12.770 [2024-12-16 05:46:46.510319] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:12.770 [2024-12-16 05:46:46.510325] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:12.770 [2024-12-16 05:46:46.510331] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:12.770 [2024-12-16 05:46:46.510388] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.770 [2024-12-16 05:46:46.510489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.770 [2024-12-16 05:46:46.510575] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:12.770 [2024-12-16 05:46:46.510575] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:12.770 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:14.141 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:14.141 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:14.141 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:14.141 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:14.141 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:14.141 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:14.398 Malloc1 00:18:14.398 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:14.656 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:14.656 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:14.913 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:14.913 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:14.913 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:15.170 Malloc2 00:18:15.170 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
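The per-device provisioning captured in the RPC calls above follows a fixed sequence. A minimal sketch for the first device, reusing the sizes, NQNs, and socket directory from this run; rpc.py is abbreviated to its script name and is assumed to target the running nvmf_tgt's RPC socket:

    rpc.py nvmf_create_transport -t VFIOUSER                          # once per target
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1                   # directory backing the vfio-user socket
    rpc.py bdev_malloc_create 64 512 -b Malloc1                       # 64 MB malloc bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
    # The second device (Malloc2 / cnode2 / vfio-user2) repeats the same steps, as the log shows.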
00:18:15.427 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:15.428 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:15.685 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:15.685 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:15.685 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:15.685 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:15.685 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:15.685 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:15.685 [2024-12-16 05:46:49.487257] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:15.685 [2024-12-16 05:46:49.487291] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3340525 ] 00:18:15.685 [2024-12-16 05:46:49.513951] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:15.685 [2024-12-16 05:46:49.526156] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:15.685 [2024-12-16 05:46:49.526175] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff2df318000 00:18:15.685 [2024-12-16 05:46:49.527161] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:15.685 [2024-12-16 05:46:49.528162] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:15.685 [2024-12-16 05:46:49.529169] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:15.685 [2024-12-16 05:46:49.530176] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:15.685 [2024-12-16 05:46:49.531186] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:15.685 [2024-12-16 05:46:49.532194] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:15.685 [2024-12-16 05:46:49.533188] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:18:15.685 [2024-12-16 05:46:49.534200] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:15.685 [2024-12-16 05:46:49.535208] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:15.685 [2024-12-16 05:46:49.535218] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff2de022000 00:18:15.685 [2024-12-16 05:46:49.536150] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:15.944 [2024-12-16 05:46:49.545582] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:15.944 [2024-12-16 05:46:49.545607] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:18:15.944 [2024-12-16 05:46:49.550289] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:15.944 [2024-12-16 05:46:49.550329] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:15.944 [2024-12-16 05:46:49.550403] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:18:15.944 [2024-12-16 05:46:49.550421] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:18:15.944 [2024-12-16 05:46:49.550426] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:18:15.944 [2024-12-16 05:46:49.551293] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:15.944 [2024-12-16 05:46:49.551305] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:18:15.944 [2024-12-16 05:46:49.551312] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:18:15.944 [2024-12-16 05:46:49.552296] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:15.944 [2024-12-16 05:46:49.552303] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:18:15.944 [2024-12-16 05:46:49.552309] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:18:15.944 [2024-12-16 05:46:49.553304] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:15.944 [2024-12-16 05:46:49.553311] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:15.944 [2024-12-16 05:46:49.554312] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:15.944 [2024-12-16 
05:46:49.554319] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:18:15.944 [2024-12-16 05:46:49.554324] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:18:15.944 [2024-12-16 05:46:49.554330] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:15.944 [2024-12-16 05:46:49.554435] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:18:15.944 [2024-12-16 05:46:49.554439] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:15.944 [2024-12-16 05:46:49.554444] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:15.944 [2024-12-16 05:46:49.555314] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:15.944 [2024-12-16 05:46:49.556318] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:15.944 [2024-12-16 05:46:49.557329] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:15.944 [2024-12-16 05:46:49.558326] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:15.944 [2024-12-16 05:46:49.558407] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:15.944 [2024-12-16 05:46:49.559340] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:15.944 [2024-12-16 05:46:49.559347] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:15.944 [2024-12-16 05:46:49.559352] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:18:15.944 [2024-12-16 05:46:49.559368] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:18:15.944 [2024-12-16 05:46:49.559375] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:18:15.944 [2024-12-16 05:46:49.559389] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:15.944 [2024-12-16 05:46:49.559396] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:15.944 [2024-12-16 05:46:49.559400] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:15.945 [2024-12-16 05:46:49.559413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:15.945 [2024-12-16 05:46:49.559456] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:15.945 [2024-12-16 05:46:49.559465] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:18:15.945 [2024-12-16 05:46:49.559469] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:18:15.945 [2024-12-16 05:46:49.559473] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:18:15.945 [2024-12-16 05:46:49.559477] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:15.945 [2024-12-16 05:46:49.559482] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:18:15.945 [2024-12-16 05:46:49.559486] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:18:15.945 [2024-12-16 05:46:49.559490] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:18:15.945 [2024-12-16 05:46:49.559497] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:18:15.945 [2024-12-16 05:46:49.559506] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:15.945 [2024-12-16 05:46:49.559518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:15.945 [2024-12-16 05:46:49.559527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.945 [2024-12-16 05:46:49.559535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.945 [2024-12-16 05:46:49.559542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.945 [2024-12-16 05:46:49.559550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.945 [2024-12-16 05:46:49.559554] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:15.945 [2024-12-16 05:46:49.559562] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:15.945 [2024-12-16 05:46:49.559570] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:15.945 [2024-12-16 05:46:49.559580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:15.945 [2024-12-16 05:46:49.559585] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:18:15.945 [2024-12-16 05:46:49.559590] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:15.945 [2024-12-16 05:46:49.559596] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:18:15.945 [2024-12-16 05:46:49.559607] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:15.945 [2024-12-16 05:46:49.559615] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:15.945 [2024-12-16 05:46:49.559623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:15.945 [2024-12-16 05:46:49.559672] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:18:15.945 [2024-12-16 05:46:49.559678] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:15.945 [2024-12-16 05:46:49.559685] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:15.945 [2024-12-16 05:46:49.559689] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:15.945 [2024-12-16 05:46:49.559692] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:15.945 [2024-12-16 05:46:49.559698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:15.945 [2024-12-16 05:46:49.559707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:15.945 [2024-12-16 05:46:49.559716] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:18:15.945 [2024-12-16 05:46:49.559727] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:18:15.945 [2024-12-16 05:46:49.559734] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:18:15.945 [2024-12-16 05:46:49.559740] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:15.945 [2024-12-16 05:46:49.559743] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:15.945 [2024-12-16 05:46:49.559746] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:15.945 [2024-12-16 05:46:49.559752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:15.945 [2024-12-16 05:46:49.559775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:15.945 [2024-12-16 05:46:49.559786] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:15.945 [2024-12-16 05:46:49.559793] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:15.945 [2024-12-16 05:46:49.559799] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:15.945 [2024-12-16 05:46:49.559803] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:15.945 [2024-12-16 05:46:49.559806] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:15.945 [2024-12-16 05:46:49.559812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:15.945 [2024-12-16 05:46:49.559826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:15.945 [2024-12-16 05:46:49.559833] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:15.945 [2024-12-16 05:46:49.559840] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:18:15.945 [2024-12-16 05:46:49.559852] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:18:15.945 [2024-12-16 05:46:49.559857] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:18:15.945 [2024-12-16 05:46:49.559862] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:15.945 [2024-12-16 05:46:49.559867] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:18:15.945 [2024-12-16 05:46:49.559871] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:18:15.945 [2024-12-16 05:46:49.559875] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:18:15.945 [2024-12-16 05:46:49.559880] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:18:15.945 [2024-12-16 05:46:49.559897] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:15.945 [2024-12-16 05:46:49.559906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:15.945 [2024-12-16 05:46:49.559916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:15.945 [2024-12-16 05:46:49.559924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:15.945 [2024-12-16 05:46:49.559934] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:15.945 [2024-12-16 05:46:49.559944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:15.945 [2024-12-16 05:46:49.559953] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:15.945 [2024-12-16 05:46:49.559965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:15.945 [2024-12-16 05:46:49.559977] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:15.945 [2024-12-16 05:46:49.559981] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:15.945 [2024-12-16 05:46:49.559984] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:15.945 [2024-12-16 05:46:49.559987] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:15.945 [2024-12-16 05:46:49.559990] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:15.945 [2024-12-16 05:46:49.559996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:15.945 [2024-12-16 05:46:49.560002] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:15.945 [2024-12-16 05:46:49.560006] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:15.945 [2024-12-16 05:46:49.560009] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:15.945 [2024-12-16 05:46:49.560015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:15.945 [2024-12-16 05:46:49.560021] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:15.945 [2024-12-16 05:46:49.560024] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:15.945 [2024-12-16 05:46:49.560029] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:15.945 [2024-12-16 05:46:49.560034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:15.945 [2024-12-16 05:46:49.560041] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:15.945 [2024-12-16 05:46:49.560045] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:15.945 [2024-12-16 05:46:49.560048] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:15.945 [2024-12-16 05:46:49.560053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:15.946 [2024-12-16 05:46:49.560059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:15.946 [2024-12-16 05:46:49.560070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:15.946 [2024-12-16 05:46:49.560080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:15.946 [2024-12-16 05:46:49.560086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:15.946 ===================================================== 00:18:15.946 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:15.946 ===================================================== 00:18:15.946 Controller Capabilities/Features 00:18:15.946 ================================ 00:18:15.946 Vendor ID: 4e58 00:18:15.946 Subsystem Vendor ID: 4e58 00:18:15.946 Serial Number: SPDK1 00:18:15.946 Model Number: SPDK bdev Controller 00:18:15.946 Firmware Version: 24.09.1 00:18:15.946 Recommended Arb Burst: 6 00:18:15.946 IEEE OUI Identifier: 8d 6b 50 00:18:15.946 Multi-path I/O 00:18:15.946 May have multiple subsystem ports: Yes 00:18:15.946 May have multiple controllers: Yes 00:18:15.946 Associated with SR-IOV VF: No 00:18:15.946 Max Data Transfer Size: 131072 00:18:15.946 Max Number of Namespaces: 32 00:18:15.946 Max Number of I/O Queues: 127 00:18:15.946 NVMe Specification Version (VS): 1.3 00:18:15.946 NVMe Specification Version (Identify): 1.3 00:18:15.946 Maximum Queue Entries: 256 00:18:15.946 Contiguous Queues Required: Yes 00:18:15.946 Arbitration Mechanisms Supported 00:18:15.946 Weighted Round Robin: Not Supported 00:18:15.946 Vendor Specific: Not Supported 00:18:15.946 Reset Timeout: 15000 ms 00:18:15.946 Doorbell Stride: 4 bytes 00:18:15.946 NVM Subsystem Reset: Not Supported 00:18:15.946 Command Sets Supported 00:18:15.946 NVM Command Set: Supported 00:18:15.946 Boot Partition: Not Supported 00:18:15.946 Memory Page Size Minimum: 4096 bytes 00:18:15.946 Memory Page Size Maximum: 4096 bytes 00:18:15.946 Persistent Memory Region: Not Supported 00:18:15.946 Optional Asynchronous Events Supported 00:18:15.946 Namespace Attribute Notices: Supported 00:18:15.946 Firmware Activation Notices: Not Supported 00:18:15.946 ANA Change Notices: Not Supported 00:18:15.946 PLE Aggregate Log Change Notices: Not Supported 00:18:15.946 LBA Status Info Alert Notices: Not Supported 00:18:15.946 EGE Aggregate Log Change Notices: Not Supported 00:18:15.946 Normal NVM Subsystem Shutdown event: Not Supported 00:18:15.946 Zone Descriptor Change Notices: Not Supported 00:18:15.946 Discovery Log Change Notices: Not Supported 00:18:15.946 Controller Attributes 00:18:15.946 128-bit Host Identifier: Supported 00:18:15.946 Non-Operational Permissive Mode: Not Supported 00:18:15.946 NVM Sets: Not Supported 00:18:15.946 Read Recovery Levels: Not Supported 00:18:15.946 Endurance Groups: Not Supported 00:18:15.946 Predictable Latency Mode: Not Supported 00:18:15.946 Traffic Based Keep ALive: Not Supported 00:18:15.946 Namespace Granularity: Not Supported 00:18:15.946 SQ Associations: Not Supported 00:18:15.946 UUID List: Not Supported 00:18:15.946 Multi-Domain Subsystem: Not Supported 00:18:15.946 Fixed Capacity Management: Not Supported 00:18:15.946 Variable Capacity Management: Not Supported 00:18:15.946 Delete Endurance Group: Not Supported 00:18:15.946 Delete NVM Set: Not Supported 00:18:15.946 Extended LBA Formats Supported: Not Supported 00:18:15.946 Flexible Data Placement Supported: Not Supported 00:18:15.946 00:18:15.946 Controller Memory Buffer Support 00:18:15.946 ================================ 00:18:15.946 Supported: No 00:18:15.946 00:18:15.946 Persistent Memory Region Support 
00:18:15.946 ================================ 00:18:15.946 Supported: No 00:18:15.946 00:18:15.946 Admin Command Set Attributes 00:18:15.946 ============================ 00:18:15.946 Security Send/Receive: Not Supported 00:18:15.946 Format NVM: Not Supported 00:18:15.946 Firmware Activate/Download: Not Supported 00:18:15.946 Namespace Management: Not Supported 00:18:15.946 Device Self-Test: Not Supported 00:18:15.946 Directives: Not Supported 00:18:15.946 NVMe-MI: Not Supported 00:18:15.946 Virtualization Management: Not Supported 00:18:15.946 Doorbell Buffer Config: Not Supported 00:18:15.946 Get LBA Status Capability: Not Supported 00:18:15.946 Command & Feature Lockdown Capability: Not Supported 00:18:15.946 Abort Command Limit: 4 00:18:15.946 Async Event Request Limit: 4 00:18:15.946 Number of Firmware Slots: N/A 00:18:15.946 Firmware Slot 1 Read-Only: N/A 00:18:15.946 Firmware Activation Without Reset: N/A 00:18:15.946 Multiple Update Detection Support: N/A 00:18:15.946 Firmware Update Granularity: No Information Provided 00:18:15.946 Per-Namespace SMART Log: No 00:18:15.946 Asymmetric Namespace Access Log Page: Not Supported 00:18:15.946 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:15.946 Command Effects Log Page: Supported 00:18:15.946 Get Log Page Extended Data: Supported 00:18:15.946 Telemetry Log Pages: Not Supported 00:18:15.946 Persistent Event Log Pages: Not Supported 00:18:15.946 Supported Log Pages Log Page: May Support 00:18:15.946 Commands Supported & Effects Log Page: Not Supported 00:18:15.946 Feature Identifiers & Effects Log Page:May Support 00:18:15.946 NVMe-MI Commands & Effects Log Page: May Support 00:18:15.946 Data Area 4 for Telemetry Log: Not Supported 00:18:15.946 Error Log Page Entries Supported: 128 00:18:15.946 Keep Alive: Supported 00:18:15.946 Keep Alive Granularity: 10000 ms 00:18:15.946 00:18:15.946 NVM Command Set Attributes 00:18:15.946 ========================== 00:18:15.946 Submission Queue Entry Size 00:18:15.946 Max: 64 00:18:15.946 Min: 64 00:18:15.946 Completion Queue Entry Size 00:18:15.946 Max: 16 00:18:15.946 Min: 16 00:18:15.946 Number of Namespaces: 32 00:18:15.946 Compare Command: Supported 00:18:15.946 Write Uncorrectable Command: Not Supported 00:18:15.946 Dataset Management Command: Supported 00:18:15.946 Write Zeroes Command: Supported 00:18:15.946 Set Features Save Field: Not Supported 00:18:15.946 Reservations: Not Supported 00:18:15.946 Timestamp: Not Supported 00:18:15.946 Copy: Supported 00:18:15.946 Volatile Write Cache: Present 00:18:15.946 Atomic Write Unit (Normal): 1 00:18:15.946 Atomic Write Unit (PFail): 1 00:18:15.946 Atomic Compare & Write Unit: 1 00:18:15.946 Fused Compare & Write: Supported 00:18:15.946 Scatter-Gather List 00:18:15.946 SGL Command Set: Supported (Dword aligned) 00:18:15.946 SGL Keyed: Not Supported 00:18:15.946 SGL Bit Bucket Descriptor: Not Supported 00:18:15.946 SGL Metadata Pointer: Not Supported 00:18:15.946 Oversized SGL: Not Supported 00:18:15.946 SGL Metadata Address: Not Supported 00:18:15.946 SGL Offset: Not Supported 00:18:15.946 Transport SGL Data Block: Not Supported 00:18:15.946 Replay Protected Memory Block: Not Supported 00:18:15.946 00:18:15.946 Firmware Slot Information 00:18:15.946 ========================= 00:18:15.946 Active slot: 1 00:18:15.946 Slot 1 Firmware Revision: 24.09.1 00:18:15.946 00:18:15.946 00:18:15.946 Commands Supported and Effects 00:18:15.946 ============================== 00:18:15.946 Admin Commands 00:18:15.946 -------------- 00:18:15.946 Get Log Page (02h): 
Supported 00:18:15.946 Identify (06h): Supported 00:18:15.946 Abort (08h): Supported 00:18:15.946 Set Features (09h): Supported 00:18:15.946 Get Features (0Ah): Supported 00:18:15.946 Asynchronous Event Request (0Ch): Supported 00:18:15.946 Keep Alive (18h): Supported 00:18:15.946 I/O Commands 00:18:15.946 ------------ 00:18:15.946 Flush (00h): Supported LBA-Change 00:18:15.946 Write (01h): Supported LBA-Change 00:18:15.946 Read (02h): Supported 00:18:15.946 Compare (05h): Supported 00:18:15.946 Write Zeroes (08h): Supported LBA-Change 00:18:15.946 Dataset Management (09h): Supported LBA-Change 00:18:15.946 Copy (19h): Supported LBA-Change 00:18:15.946 00:18:15.946 Error Log 00:18:15.946 ========= 00:18:15.946 00:18:15.946 Arbitration 00:18:15.946 =========== 00:18:15.946 Arbitration Burst: 1 00:18:15.946 00:18:15.946 Power Management 00:18:15.946 ================ 00:18:15.946 Number of Power States: 1 00:18:15.946 Current Power State: Power State #0 00:18:15.946 Power State #0: 00:18:15.946 Max Power: 0.00 W 00:18:15.946 Non-Operational State: Operational 00:18:15.946 Entry Latency: Not Reported 00:18:15.946 Exit Latency: Not Reported 00:18:15.946 Relative Read Throughput: 0 00:18:15.946 Relative Read Latency: 0 00:18:15.946 Relative Write Throughput: 0 00:18:15.946 Relative Write Latency: 0 00:18:15.946 Idle Power: Not Reported 00:18:15.946 Active Power: Not Reported 00:18:15.946 Non-Operational Permissive Mode: Not Supported 00:18:15.946 00:18:15.946 Health Information 00:18:15.946 ================== 00:18:15.946 Critical Warnings: 00:18:15.946 Available Spare Space: OK 00:18:15.946 Temperature: OK 00:18:15.946 Device Reliability: OK 00:18:15.946 Read Only: No 00:18:15.946 Volatile Memory Backup: OK 00:18:15.946 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:15.946 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:15.946 Available Spare: 0% 00:18:15.947 Availabl[2024-12-16 05:46:49.560168] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:15.947 [2024-12-16 05:46:49.560176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:15.947 [2024-12-16 05:46:49.560198] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:18:15.947 [2024-12-16 05:46:49.560207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.947 [2024-12-16 05:46:49.560213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.947 [2024-12-16 05:46:49.560218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.947 [2024-12-16 05:46:49.560224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.947 [2024-12-16 05:46:49.560345] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:15.947 [2024-12-16 05:46:49.560355] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:15.947 [2024-12-16 05:46:49.561349] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling 
controller 00:18:15.947 [2024-12-16 05:46:49.561397] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:18:15.947 [2024-12-16 05:46:49.561403] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:18:15.947 [2024-12-16 05:46:49.562357] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:15.947 [2024-12-16 05:46:49.562367] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:18:15.947 [2024-12-16 05:46:49.562419] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:15.947 [2024-12-16 05:46:49.564853] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:15.947 e Spare Threshold: 0% 00:18:15.947 Life Percentage Used: 0% 00:18:15.947 Data Units Read: 0 00:18:15.947 Data Units Written: 0 00:18:15.947 Host Read Commands: 0 00:18:15.947 Host Write Commands: 0 00:18:15.947 Controller Busy Time: 0 minutes 00:18:15.947 Power Cycles: 0 00:18:15.947 Power On Hours: 0 hours 00:18:15.947 Unsafe Shutdowns: 0 00:18:15.947 Unrecoverable Media Errors: 0 00:18:15.947 Lifetime Error Log Entries: 0 00:18:15.947 Warning Temperature Time: 0 minutes 00:18:15.947 Critical Temperature Time: 0 minutes 00:18:15.947 00:18:15.947 Number of Queues 00:18:15.947 ================ 00:18:15.947 Number of I/O Submission Queues: 127 00:18:15.947 Number of I/O Completion Queues: 127 00:18:15.947 00:18:15.947 Active Namespaces 00:18:15.947 ================= 00:18:15.947 Namespace ID:1 00:18:15.947 Error Recovery Timeout: Unlimited 00:18:15.947 Command Set Identifier: NVM (00h) 00:18:15.947 Deallocate: Supported 00:18:15.947 Deallocated/Unwritten Error: Not Supported 00:18:15.947 Deallocated Read Value: Unknown 00:18:15.947 Deallocate in Write Zeroes: Not Supported 00:18:15.947 Deallocated Guard Field: 0xFFFF 00:18:15.947 Flush: Supported 00:18:15.947 Reservation: Supported 00:18:15.947 Namespace Sharing Capabilities: Multiple Controllers 00:18:15.947 Size (in LBAs): 131072 (0GiB) 00:18:15.947 Capacity (in LBAs): 131072 (0GiB) 00:18:15.947 Utilization (in LBAs): 131072 (0GiB) 00:18:15.947 NGUID: 620D6963E5D449B78FA91A3F64BD26E8 00:18:15.947 UUID: 620d6963-e5d4-49b7-8fa9-1a3f64bd26e8 00:18:15.947 Thin Provisioning: Not Supported 00:18:15.947 Per-NS Atomic Units: Yes 00:18:15.947 Atomic Boundary Size (Normal): 0 00:18:15.947 Atomic Boundary Size (PFail): 0 00:18:15.947 Atomic Boundary Offset: 0 00:18:15.947 Maximum Single Source Range Length: 65535 00:18:15.947 Maximum Copy Length: 65535 00:18:15.947 Maximum Source Range Count: 1 00:18:15.947 NGUID/EUI64 Never Reused: No 00:18:15.947 Namespace Write Protected: No 00:18:15.947 Number of LBA Formats: 1 00:18:15.947 Current LBA Format: LBA Format #00 00:18:15.947 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:15.947 00:18:15.947 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:15.947 [2024-12-16 05:46:49.782332] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:21.204 Initializing NVMe Controllers 00:18:21.204 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:21.204 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:21.204 Initialization complete. Launching workers. 00:18:21.204 ======================================================== 00:18:21.204 Latency(us) 00:18:21.204 Device Information : IOPS MiB/s Average min max 00:18:21.204 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39940.97 156.02 3204.56 930.63 7549.76 00:18:21.204 ======================================================== 00:18:21.204 Total : 39940.97 156.02 3204.56 930.63 7549.76 00:18:21.204 00:18:21.204 [2024-12-16 05:46:54.800457] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:21.204 05:46:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:21.204 [2024-12-16 05:46:55.024531] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:26.460 Initializing NVMe Controllers 00:18:26.460 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:26.460 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:26.460 Initialization complete. Launching workers. 00:18:26.460 ======================================================== 00:18:26.460 Latency(us) 00:18:26.460 Device Information : IOPS MiB/s Average min max 00:18:26.460 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16057.29 62.72 7976.84 6921.72 8972.27 00:18:26.460 ======================================================== 00:18:26.460 Total : 16057.29 62.72 7976.84 6921.72 8972.27 00:18:26.460 00:18:26.460 [2024-12-16 05:47:00.065027] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:26.460 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:26.460 [2024-12-16 05:47:00.261988] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:31.718 [2024-12-16 05:47:05.335129] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:31.718 Initializing NVMe Controllers 00:18:31.718 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:31.718 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:31.718 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:31.718 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:31.718 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:31.718 Initialization complete. Launching workers. 
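Each spdk_nvme_perf run above ends with the same fixed-width summary table (Device Information : IOPS MiB/s Average min max, latencies in microseconds). A minimal sketch of pulling those rows out of a captured log with Python; the table layout is assumed to be exactly as printed above (it can differ between SPDK versions), and the log filename argument is illustrative, not part of the test:

import re, sys

# Matches summary rows such as:
#   VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39940.97 156.02 3204.56 930.63 7549.76
ROW = re.compile(
    r"VFIOUSER \((?P<traddr>[^)]+)\) NSID (?P<nsid>\d+) from core (?P<core>\d+):\s+"
    r"(?P<iops>[\d.]+)\s+(?P<mibs>[\d.]+)\s+(?P<avg>[\d.]+)\s+(?P<min>[\d.]+)\s+(?P<max>[\d.]+)")

for line in open(sys.argv[1]):          # e.g. a saved copy of this autotest output
    m = ROW.search(line)
    if m:
        print(f"{m['traddr']} nsid={m['nsid']} core={m['core']}: "
              f"{float(m['iops']):.0f} IOPS, avg {m['avg']} us (min {m['min']}, max {m['max']})")

Applied to the read and write runs above this would report roughly 39.9k IOPS at ~3.2 ms average and 16.1k IOPS at ~8.0 ms average, matching the Total rows.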
00:18:31.718 Starting thread on core 2 00:18:31.718 Starting thread on core 3 00:18:31.718 Starting thread on core 1 00:18:31.718 05:47:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:31.976 [2024-12-16 05:47:05.611224] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:35.257 [2024-12-16 05:47:08.740083] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:35.257 Initializing NVMe Controllers 00:18:35.257 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:35.257 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:35.257 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:35.257 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:35.257 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:35.257 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:35.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:35.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:35.257 Initialization complete. Launching workers. 00:18:35.257 Starting thread on core 1 with urgent priority queue 00:18:35.257 Starting thread on core 2 with urgent priority queue 00:18:35.257 Starting thread on core 3 with urgent priority queue 00:18:35.257 Starting thread on core 0 with urgent priority queue 00:18:35.257 SPDK bdev Controller (SPDK1 ) core 0: 7608.33 IO/s 13.14 secs/100000 ios 00:18:35.257 SPDK bdev Controller (SPDK1 ) core 1: 8077.33 IO/s 12.38 secs/100000 ios 00:18:35.257 SPDK bdev Controller (SPDK1 ) core 2: 9030.00 IO/s 11.07 secs/100000 ios 00:18:35.257 SPDK bdev Controller (SPDK1 ) core 3: 9577.33 IO/s 10.44 secs/100000 ios 00:18:35.257 ======================================================== 00:18:35.257 00:18:35.257 05:47:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:35.257 [2024-12-16 05:47:09.006270] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:35.257 Initializing NVMe Controllers 00:18:35.257 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:35.257 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:35.257 Namespace ID: 1 size: 0GB 00:18:35.257 Initialization complete. 00:18:35.257 INFO: using host memory buffer for IO 00:18:35.257 Hello world! 
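In the arbitration summary above, the two per-core columns are two views of the same measurement: secs/100000 ios is simply 100000 divided by the reported IO/s. A quick check of that relationship (the per-core rates are copied from the table above; the snippet itself is not part of the test):

# IO/s reported per core by the arbitration run above
rates = {0: 7608.33, 1: 8077.33, 2: 9030.00, 3: 9577.33}

for core, iops in rates.items():
    # time needed to complete 100000 I/Os at the reported rate
    print(f"core {core}: {100000 / iops:.2f} secs/100000 ios")
# -> 13.14, 12.38, 11.07, 10.44, matching the table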
00:18:35.257 [2024-12-16 05:47:09.042489] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:35.257 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:35.515 [2024-12-16 05:47:09.310240] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:36.886 Initializing NVMe Controllers 00:18:36.887 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:36.887 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:36.887 Initialization complete. Launching workers. 00:18:36.887 submit (in ns) avg, min, max = 7743.6, 3128.6, 3999148.6 00:18:36.887 complete (in ns) avg, min, max = 19668.5, 1711.4, 5990895.2 00:18:36.887 00:18:36.887 Submit histogram 00:18:36.887 ================ 00:18:36.887 Range in us Cumulative Count 00:18:36.887 3.124 - 3.139: 0.0182% ( 3) 00:18:36.887 3.139 - 3.154: 0.0364% ( 3) 00:18:36.887 3.154 - 3.170: 0.0789% ( 7) 00:18:36.887 3.170 - 3.185: 0.1517% ( 12) 00:18:36.887 3.185 - 3.200: 0.7223% ( 94) 00:18:36.887 3.200 - 3.215: 2.7739% ( 338) 00:18:36.887 3.215 - 3.230: 7.5873% ( 793) 00:18:36.887 3.230 - 3.246: 13.5903% ( 989) 00:18:36.887 3.246 - 3.261: 20.2124% ( 1091) 00:18:36.887 3.261 - 3.276: 27.7936% ( 1249) 00:18:36.887 3.276 - 3.291: 33.9302% ( 1011) 00:18:36.887 3.291 - 3.307: 38.8103% ( 804) 00:18:36.887 3.307 - 3.322: 43.6844% ( 803) 00:18:36.887 3.322 - 3.337: 48.4734% ( 789) 00:18:36.887 3.337 - 3.352: 51.9575% ( 574) 00:18:36.887 3.352 - 3.368: 57.5296% ( 918) 00:18:36.887 3.368 - 3.383: 64.0243% ( 1070) 00:18:36.887 3.383 - 3.398: 69.3050% ( 870) 00:18:36.887 3.398 - 3.413: 75.1381% ( 961) 00:18:36.887 3.413 - 3.429: 79.8543% ( 777) 00:18:36.887 3.429 - 3.444: 82.9803% ( 515) 00:18:36.887 3.444 - 3.459: 84.9165% ( 319) 00:18:36.887 3.459 - 3.474: 85.7663% ( 140) 00:18:36.887 3.474 - 3.490: 86.2762% ( 84) 00:18:36.887 3.490 - 3.505: 86.7496% ( 78) 00:18:36.887 3.505 - 3.520: 87.4234% ( 111) 00:18:36.887 3.520 - 3.535: 88.2185% ( 131) 00:18:36.887 3.535 - 3.550: 89.1533% ( 154) 00:18:36.887 3.550 - 3.566: 90.2458% ( 180) 00:18:36.887 3.566 - 3.581: 91.2352% ( 163) 00:18:36.887 3.581 - 3.596: 92.1578% ( 152) 00:18:36.887 3.596 - 3.611: 93.0319% ( 144) 00:18:36.887 3.611 - 3.627: 93.9059% ( 144) 00:18:36.887 3.627 - 3.642: 94.8285% ( 152) 00:18:36.887 3.642 - 3.657: 95.6297% ( 132) 00:18:36.887 3.657 - 3.672: 96.3156% ( 113) 00:18:36.887 3.672 - 3.688: 96.9287% ( 101) 00:18:36.887 3.688 - 3.703: 97.3414% ( 68) 00:18:36.887 3.703 - 3.718: 97.7299% ( 64) 00:18:36.887 3.718 - 3.733: 98.0516% ( 53) 00:18:36.887 3.733 - 3.749: 98.3794% ( 54) 00:18:36.887 3.749 - 3.764: 98.5918% ( 35) 00:18:36.887 3.764 - 3.779: 98.6950% ( 17) 00:18:36.887 3.779 - 3.794: 98.7618% ( 11) 00:18:36.887 3.794 - 3.810: 98.8407% ( 13) 00:18:36.887 3.810 - 3.825: 98.9560% ( 19) 00:18:36.887 3.825 - 3.840: 99.0228% ( 11) 00:18:36.887 3.840 - 3.855: 99.0895% ( 11) 00:18:36.887 3.855 - 3.870: 99.1259% ( 6) 00:18:36.887 3.870 - 3.886: 99.1866% ( 10) 00:18:36.887 3.886 - 3.901: 99.2473% ( 10) 00:18:36.887 3.901 - 3.931: 99.3202% ( 12) 00:18:36.887 3.931 - 3.962: 99.3869% ( 11) 00:18:36.887 3.962 - 3.992: 99.4173% ( 5) 00:18:36.887 3.992 - 4.023: 99.4294% ( 2) 00:18:36.887 4.023 - 4.053: 99.4476% ( 3) 00:18:36.887 4.053 - 4.084: 99.4659% ( 3) 
00:18:36.887 4.084 - 4.114: 99.4719% ( 1) 00:18:36.887 4.114 - 4.145: 99.4780% ( 1) 00:18:36.887 4.145 - 4.175: 99.4962% ( 3) 00:18:36.887 4.175 - 4.206: 99.5023% ( 1) 00:18:36.887 4.206 - 4.236: 99.5266% ( 4) 00:18:36.887 4.236 - 4.267: 99.5326% ( 1) 00:18:36.887 4.267 - 4.297: 99.5387% ( 1) 00:18:36.887 4.450 - 4.480: 99.5448% ( 1) 00:18:36.887 5.242 - 5.272: 99.5508% ( 1) 00:18:36.887 5.333 - 5.364: 99.5569% ( 1) 00:18:36.887 5.425 - 5.455: 99.5690% ( 2) 00:18:36.887 5.486 - 5.516: 99.5751% ( 1) 00:18:36.887 5.516 - 5.547: 99.5812% ( 1) 00:18:36.887 5.608 - 5.638: 99.5873% ( 1) 00:18:36.887 5.638 - 5.669: 99.5994% ( 2) 00:18:36.887 5.669 - 5.699: 99.6055% ( 1) 00:18:36.887 5.730 - 5.760: 99.6115% ( 1) 00:18:36.887 5.790 - 5.821: 99.6237% ( 2) 00:18:36.887 5.882 - 5.912: 99.6297% ( 1) 00:18:36.887 5.912 - 5.943: 99.6358% ( 1) 00:18:36.887 5.943 - 5.973: 99.6419% ( 1) 00:18:36.887 6.004 - 6.034: 99.6540% ( 2) 00:18:36.887 6.034 - 6.065: 99.6601% ( 1) 00:18:36.887 6.065 - 6.095: 99.6662% ( 1) 00:18:36.887 6.156 - 6.187: 99.6722% ( 1) 00:18:36.887 6.217 - 6.248: 99.6783% ( 1) 00:18:36.887 6.248 - 6.278: 99.6844% ( 1) 00:18:36.887 6.309 - 6.339: 99.6904% ( 1) 00:18:36.887 6.491 - 6.522: 99.6965% ( 1) 00:18:36.887 6.674 - 6.705: 99.7086% ( 2) 00:18:36.887 6.735 - 6.766: 99.7208% ( 2) 00:18:36.887 6.796 - 6.827: 99.7329% ( 2) 00:18:36.887 6.827 - 6.857: 99.7451% ( 2) 00:18:36.887 6.857 - 6.888: 99.7572% ( 2) 00:18:36.887 6.979 - 7.010: 99.7633% ( 1) 00:18:36.887 7.192 - 7.223: 99.7754% ( 2) 00:18:36.887 7.345 - 7.375: 99.7815% ( 1) 00:18:36.887 7.650 - 7.680: 99.7876% ( 1) 00:18:36.887 7.741 - 7.771: 99.7997% ( 2) 00:18:36.887 7.771 - 7.802: 99.8058% ( 1) 00:18:36.887 7.802 - 7.863: 99.8179% ( 2) 00:18:36.887 7.863 - 7.924: 99.8240% ( 1) 00:18:36.887 7.985 - 8.046: 99.8300% ( 1) 00:18:36.887 8.107 - 8.168: 99.8361% ( 1) 00:18:36.887 8.472 - 8.533: 99.8422% ( 1) 00:18:36.887 8.777 - 8.838: 99.8483% ( 1) 00:18:36.887 9.265 - 9.326: 99.8543% ( 1) 00:18:36.887 11.886 - 11.947: 99.8604% ( 1) 00:18:36.887 13.592 - 13.653: 99.8665% ( 1) 00:18:36.887 15.543 - 15.604: 99.8786% ( 2) 00:18:36.887 18.895 - 19.017: 99.8847% ( 1) 00:18:36.887 19.139 - 19.261: 99.8907% ( 1) 00:18:36.887 3994.575 - 4025.783: 100.0000% ( 18) 00:18:36.887 00:18:36.887 Complete histogram 00:18:36.887 ================== 00:18:36.887 Range in us Cumulative Count 00:18:36.887 1.707 - 1.714: 0.0182% ( 3) 00:18:36.887 1.714 - 1.722: 0.1032% ( 14) 00:18:36.887 1.722 - 1.730: 0.1396% ( 6) 00:18:36.887 1.730 - 1.737: 0.1457% ( 1) 00:18:36.887 1.737 - 1.745: 0.1700% ( 4) 00:18:36.887 1.745 - 1.752: 0.3885% ( 36) 00:18:36.887 1.752 - 1.760: 2.8710% ( 409) 00:18:36.887 1.760 - 1.768: 11.1320% ( 1361) 00:18:36.887 1.768 - 1.775: 20.1032% ( 1478) 00:18:36.887 1.775 - 1.783: 24.2549% ( 684) 00:18:36.887 1.783 - 1.790: 25.8816% ( 268) 00:18:36.887 1.790 - 1.798: 26.8832% ( 165) 00:18:36.887 1.798 - 1.806: 28.8558% ( 325) 00:18:36.887 1.806 - 1.813: 40.4917% ( 1917) 00:18:36.887 1.813 - 1.821: 64.6009% ( 3972) 00:18:36.887 1.821 - 1.829: 83.6540% ( 3139) 00:18:36.887 1.829 - 1.836: 91.4476% ( 1284) 00:18:36.887 1.836 - 1.844: 93.8149% ( 390) 00:18:36.887 1.844 - 1.851: 95.1077% ( 213) 00:18:36.887 1.851 - 1.859: 95.8968% ( 130) 00:18:36.887 1.859 - 1.867: 96.1882% ( 48) 00:18:36.887 1.867 - 1.874: 96.4917% ( 50) 00:18:36.887 1.874 - 1.882: 96.8983% ( 67) 00:18:36.887 1.882 - 1.890: 97.2625% ( 60) 00:18:36.887 1.890 - 1.897: 97.4143% ( 25) 00:18:36.887 1.897 - 1.905: 97.5660% ( 25) 00:18:36.887 1.905 - 1.912: 97.6813% ( 19) 00:18:36.887 1.912 - 
1.920: 97.7542% ( 12) 00:18:36.887 1.920 - 1.928: 97.8634% ( 18) 00:18:36.887 1.928 - 1.935: 98.0030% ( 23) 00:18:36.887 1.935 - 1.943: 98.2701% ( 44) 00:18:36.887 1.943 - 1.950: 98.4947% ( 37) 00:18:36.887 1.950 - 1.966: 98.5797% ( 14) 00:18:36.887 1.966 - 1.981: 98.5979% ( 3) 00:18:36.887 1.981 - 1.996: 98.6100% ( 2) 00:18:36.887 1.996 - 2.011: 98.6161% ( 1) 00:18:36.887 2.011 - 2.027: 98.6282% ( 2) 00:18:36.887 2.042 - 2.057: 98.6343% ( 1) 00:18:36.887 2.072 - 2.088: 98.6586% ( 4) 00:18:36.887 2.118 - 2.133: 98.6646% ( 1) 00:18:36.887 2.133 - 2.149: 98.7132% ( 8) 00:18:36.887 2.149 - 2.164: 98.7436% ( 5) 00:18:36.887 2.164 - 2.179: 98.7678% ( 4) 00:18:36.887 2.179 - 2.194: 98.7739% ( 1) 00:18:36.887 2.194 - 2.210: 98.8771% ( 17) 00:18:36.887 2.210 - 2.225: 99.2291% ( 58) 00:18:36.887 2.225 - 2.240: 99.3080% ( 13) 00:18:36.887 2.240 - 2.255: 99.3323% ( 4) 00:18:36.887 2.255 - 2.270: 99.3505% ( 3) 00:18:36.887 2.286 - 2.301: 99.3627% ( 2) 00:18:36.887 2.301 - 2.316: 99.3687% ( 1) 00:18:36.887 3.825 - 3.840: 99.3748% ( 1) 00:18:36.887 3.870 - 3.886: 99.3809% ( 1) 00:18:36.887 3.901 - 3.931: 99.3869% ( 1) 00:18:36.887 3.931 - 3.9[2024-12-16 05:47:10.328433] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:36.887 62: 99.3930% ( 1) 00:18:36.887 3.962 - 3.992: 99.4052% ( 2) 00:18:36.887 4.053 - 4.084: 99.4112% ( 1) 00:18:36.887 4.145 - 4.175: 99.4173% ( 1) 00:18:36.887 4.206 - 4.236: 99.4234% ( 1) 00:18:36.887 4.236 - 4.267: 99.4294% ( 1) 00:18:36.887 4.480 - 4.510: 99.4416% ( 2) 00:18:36.887 4.510 - 4.541: 99.4476% ( 1) 00:18:36.887 4.571 - 4.602: 99.4537% ( 1) 00:18:36.887 4.693 - 4.724: 99.4598% ( 1) 00:18:36.888 4.724 - 4.754: 99.4659% ( 1) 00:18:36.888 4.907 - 4.937: 99.4719% ( 1) 00:18:36.888 5.059 - 5.090: 99.4780% ( 1) 00:18:36.888 5.211 - 5.242: 99.4841% ( 1) 00:18:36.888 5.333 - 5.364: 99.4901% ( 1) 00:18:36.888 5.394 - 5.425: 99.5023% ( 2) 00:18:36.888 5.455 - 5.486: 99.5083% ( 1) 00:18:36.888 5.699 - 5.730: 99.5144% ( 1) 00:18:36.888 5.882 - 5.912: 99.5205% ( 1) 00:18:36.888 5.973 - 6.004: 99.5266% ( 1) 00:18:36.888 6.309 - 6.339: 99.5326% ( 1) 00:18:36.888 7.406 - 7.436: 99.5387% ( 1) 00:18:36.888 12.373 - 12.434: 99.5448% ( 1) 00:18:36.888 14.324 - 14.385: 99.5508% ( 1) 00:18:36.888 2168.930 - 2184.533: 99.5569% ( 1) 00:18:36.888 2231.345 - 2246.949: 99.5630% ( 1) 00:18:36.888 3042.743 - 3058.347: 99.5690% ( 1) 00:18:36.888 3994.575 - 4025.783: 99.9879% ( 69) 00:18:36.888 4962.011 - 4993.219: 99.9939% ( 1) 00:18:36.888 5960.655 - 5991.863: 100.0000% ( 1) 00:18:36.888 00:18:36.888 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:36.888 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:36.888 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:36.888 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:36.888 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:36.888 [ 00:18:36.888 { 00:18:36.888 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:36.888 "subtype": "Discovery", 00:18:36.888 "listen_addresses": [], 00:18:36.888 
"allow_any_host": true, 00:18:36.888 "hosts": [] 00:18:36.888 }, 00:18:36.888 { 00:18:36.888 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:36.888 "subtype": "NVMe", 00:18:36.888 "listen_addresses": [ 00:18:36.888 { 00:18:36.888 "trtype": "VFIOUSER", 00:18:36.888 "adrfam": "IPv4", 00:18:36.888 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:36.888 "trsvcid": "0" 00:18:36.888 } 00:18:36.888 ], 00:18:36.888 "allow_any_host": true, 00:18:36.888 "hosts": [], 00:18:36.888 "serial_number": "SPDK1", 00:18:36.888 "model_number": "SPDK bdev Controller", 00:18:36.888 "max_namespaces": 32, 00:18:36.888 "min_cntlid": 1, 00:18:36.888 "max_cntlid": 65519, 00:18:36.888 "namespaces": [ 00:18:36.888 { 00:18:36.888 "nsid": 1, 00:18:36.888 "bdev_name": "Malloc1", 00:18:36.888 "name": "Malloc1", 00:18:36.888 "nguid": "620D6963E5D449B78FA91A3F64BD26E8", 00:18:36.888 "uuid": "620d6963-e5d4-49b7-8fa9-1a3f64bd26e8" 00:18:36.888 } 00:18:36.888 ] 00:18:36.888 }, 00:18:36.888 { 00:18:36.888 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:36.888 "subtype": "NVMe", 00:18:36.888 "listen_addresses": [ 00:18:36.888 { 00:18:36.888 "trtype": "VFIOUSER", 00:18:36.888 "adrfam": "IPv4", 00:18:36.888 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:36.888 "trsvcid": "0" 00:18:36.888 } 00:18:36.888 ], 00:18:36.888 "allow_any_host": true, 00:18:36.888 "hosts": [], 00:18:36.888 "serial_number": "SPDK2", 00:18:36.888 "model_number": "SPDK bdev Controller", 00:18:36.888 "max_namespaces": 32, 00:18:36.888 "min_cntlid": 1, 00:18:36.888 "max_cntlid": 65519, 00:18:36.888 "namespaces": [ 00:18:36.888 { 00:18:36.888 "nsid": 1, 00:18:36.888 "bdev_name": "Malloc2", 00:18:36.888 "name": "Malloc2", 00:18:36.888 "nguid": "DFE8F40283794E3BA3938FF85F679CF8", 00:18:36.888 "uuid": "dfe8f402-8379-4e3b-a393-8ff85f679cf8" 00:18:36.888 } 00:18:36.888 ] 00:18:36.888 } 00:18:36.888 ] 00:18:36.888 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:36.888 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3343883 00:18:36.888 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:36.888 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:36.888 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:36.888 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:36.888 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:36.888 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:36.888 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:36.888 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:36.888 [2024-12-16 05:47:10.711275] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:37.145 Malloc3 00:18:37.145 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:37.145 [2024-12-16 05:47:10.956058] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:37.145 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:37.145 Asynchronous Event Request test 00:18:37.145 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:37.145 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:37.145 Registering asynchronous event callbacks... 00:18:37.145 Starting namespace attribute notice tests for all controllers... 00:18:37.145 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:37.146 aer_cb - Changed Namespace 00:18:37.146 Cleaning up... 00:18:37.403 [ 00:18:37.403 { 00:18:37.403 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:37.403 "subtype": "Discovery", 00:18:37.403 "listen_addresses": [], 00:18:37.403 "allow_any_host": true, 00:18:37.403 "hosts": [] 00:18:37.403 }, 00:18:37.403 { 00:18:37.403 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:37.403 "subtype": "NVMe", 00:18:37.403 "listen_addresses": [ 00:18:37.403 { 00:18:37.403 "trtype": "VFIOUSER", 00:18:37.403 "adrfam": "IPv4", 00:18:37.403 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:37.403 "trsvcid": "0" 00:18:37.403 } 00:18:37.403 ], 00:18:37.403 "allow_any_host": true, 00:18:37.403 "hosts": [], 00:18:37.403 "serial_number": "SPDK1", 00:18:37.403 "model_number": "SPDK bdev Controller", 00:18:37.403 "max_namespaces": 32, 00:18:37.403 "min_cntlid": 1, 00:18:37.403 "max_cntlid": 65519, 00:18:37.403 "namespaces": [ 00:18:37.403 { 00:18:37.403 "nsid": 1, 00:18:37.403 "bdev_name": "Malloc1", 00:18:37.403 "name": "Malloc1", 00:18:37.403 "nguid": "620D6963E5D449B78FA91A3F64BD26E8", 00:18:37.403 "uuid": "620d6963-e5d4-49b7-8fa9-1a3f64bd26e8" 00:18:37.403 }, 00:18:37.403 { 00:18:37.403 "nsid": 2, 00:18:37.403 "bdev_name": "Malloc3", 00:18:37.403 "name": "Malloc3", 00:18:37.403 "nguid": "8F60D37914A442FDBF910BCB248BDF7D", 00:18:37.403 "uuid": "8f60d379-14a4-42fd-bf91-0bcb248bdf7d" 00:18:37.403 } 00:18:37.403 ] 00:18:37.403 }, 00:18:37.403 { 00:18:37.403 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:37.403 "subtype": "NVMe", 00:18:37.403 "listen_addresses": [ 00:18:37.403 { 00:18:37.403 "trtype": "VFIOUSER", 00:18:37.403 "adrfam": "IPv4", 00:18:37.403 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:37.403 "trsvcid": "0" 00:18:37.403 } 00:18:37.403 ], 00:18:37.403 "allow_any_host": true, 00:18:37.403 "hosts": [], 00:18:37.403 "serial_number": "SPDK2", 00:18:37.403 "model_number": "SPDK bdev 
Controller", 00:18:37.403 "max_namespaces": 32, 00:18:37.403 "min_cntlid": 1, 00:18:37.403 "max_cntlid": 65519, 00:18:37.403 "namespaces": [ 00:18:37.403 { 00:18:37.403 "nsid": 1, 00:18:37.403 "bdev_name": "Malloc2", 00:18:37.403 "name": "Malloc2", 00:18:37.403 "nguid": "DFE8F40283794E3BA3938FF85F679CF8", 00:18:37.403 "uuid": "dfe8f402-8379-4e3b-a393-8ff85f679cf8" 00:18:37.403 } 00:18:37.403 ] 00:18:37.403 } 00:18:37.403 ] 00:18:37.403 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3343883 00:18:37.403 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:37.403 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:37.403 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:37.403 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:37.403 [2024-12-16 05:47:11.209480] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:37.403 [2024-12-16 05:47:11.209514] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3343961 ] 00:18:37.403 [2024-12-16 05:47:11.240010] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:37.403 [2024-12-16 05:47:11.250090] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:37.403 [2024-12-16 05:47:11.250111] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f847ec17000 00:18:37.403 [2024-12-16 05:47:11.251094] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:37.403 [2024-12-16 05:47:11.252097] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:37.403 [2024-12-16 05:47:11.253103] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:37.403 [2024-12-16 05:47:11.254105] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:37.403 [2024-12-16 05:47:11.255113] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:37.403 [2024-12-16 05:47:11.256114] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:37.403 [2024-12-16 05:47:11.257121] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:37.403 [2024-12-16 05:47:11.258131] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
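The two nvmf_get_subsystems dumps above (before and after nvmf_subsystem_add_ns attaches Malloc3 as nsid 2 on cnode1) share the same JSON shape: a list of subsystems, each with an nqn, listen_addresses, and a namespaces array. A small sketch that summarizes which bdev backs which namespace; it assumes the rpc.py output is piped in on stdin exactly as printed above, and the script name is illustrative:

import json, sys

# e.g.:  scripts/rpc.py nvmf_get_subsystems | python3 list_ns.py
for subsys in json.load(sys.stdin):
    # the discovery subsystem has no serial_number or namespaces, hence .get()
    print(subsys["nqn"], subsys.get("serial_number", ""))
    for ns in subsys.get("namespaces", []):
        print(f"  nsid {ns['nsid']}: bdev {ns['bdev_name']} (uuid {ns['uuid']})")

Run against the second dump above it would list Malloc1 (nsid 1) and Malloc3 (nsid 2) under nqn.2019-07.io.spdk:cnode1 and Malloc2 (nsid 1) under nqn.2019-07.io.spdk:cnode2.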
00:18:37.663 [2024-12-16 05:47:11.259137] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:37.663 [2024-12-16 05:47:11.259159] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f847d921000 00:18:37.663 [2024-12-16 05:47:11.260075] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:37.663 [2024-12-16 05:47:11.268400] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:37.663 [2024-12-16 05:47:11.268428] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:18:37.663 [2024-12-16 05:47:11.273500] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:37.663 [2024-12-16 05:47:11.273538] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:37.663 [2024-12-16 05:47:11.273608] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:18:37.663 [2024-12-16 05:47:11.273623] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:18:37.663 [2024-12-16 05:47:11.273628] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:18:37.663 [2024-12-16 05:47:11.274504] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:37.663 [2024-12-16 05:47:11.274514] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:18:37.663 [2024-12-16 05:47:11.274520] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:18:37.663 [2024-12-16 05:47:11.275510] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:37.663 [2024-12-16 05:47:11.275519] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:18:37.663 [2024-12-16 05:47:11.275525] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:18:37.663 [2024-12-16 05:47:11.276522] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:37.663 [2024-12-16 05:47:11.276532] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:37.663 [2024-12-16 05:47:11.277529] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:37.663 [2024-12-16 05:47:11.277538] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:18:37.663 [2024-12-16 
05:47:11.277543] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:18:37.663 [2024-12-16 05:47:11.277548] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:37.663 [2024-12-16 05:47:11.277654] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:18:37.663 [2024-12-16 05:47:11.277658] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:37.663 [2024-12-16 05:47:11.277663] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:37.663 [2024-12-16 05:47:11.278540] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:37.663 [2024-12-16 05:47:11.279544] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:37.663 [2024-12-16 05:47:11.280549] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:37.663 [2024-12-16 05:47:11.281546] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:37.663 [2024-12-16 05:47:11.281586] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:37.663 [2024-12-16 05:47:11.282560] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:37.663 [2024-12-16 05:47:11.282569] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:37.663 [2024-12-16 05:47:11.282574] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:18:37.663 [2024-12-16 05:47:11.282590] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:18:37.663 [2024-12-16 05:47:11.282597] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:18:37.663 [2024-12-16 05:47:11.282608] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:37.663 [2024-12-16 05:47:11.282613] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:37.663 [2024-12-16 05:47:11.282617] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:37.663 [2024-12-16 05:47:11.282627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:37.663 [2024-12-16 05:47:11.289858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:37.663 [2024-12-16 05:47:11.289870] 
nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:18:37.663 [2024-12-16 05:47:11.289875] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:18:37.663 [2024-12-16 05:47:11.289878] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:18:37.663 [2024-12-16 05:47:11.289882] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:37.663 [2024-12-16 05:47:11.289887] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:18:37.663 [2024-12-16 05:47:11.289891] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:18:37.663 [2024-12-16 05:47:11.289895] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:18:37.663 [2024-12-16 05:47:11.289901] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:18:37.663 [2024-12-16 05:47:11.289911] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:37.663 [2024-12-16 05:47:11.297853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:37.663 [2024-12-16 05:47:11.297866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.663 [2024-12-16 05:47:11.297873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.663 [2024-12-16 05:47:11.297881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.663 [2024-12-16 05:47:11.297889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.663 [2024-12-16 05:47:11.297896] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:18:37.663 [2024-12-16 05:47:11.297904] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:37.663 [2024-12-16 05:47:11.297913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:37.663 [2024-12-16 05:47:11.305853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:37.663 [2024-12-16 05:47:11.305862] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:18:37.663 [2024-12-16 05:47:11.305866] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:37.663 [2024-12-16 05:47:11.305872] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:18:37.663 [2024-12-16 05:47:11.305879] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:18:37.663 [2024-12-16 05:47:11.305888] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:37.663 [2024-12-16 05:47:11.313854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:37.663 [2024-12-16 05:47:11.313907] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:18:37.663 [2024-12-16 05:47:11.313914] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:18:37.663 [2024-12-16 05:47:11.313921] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:37.663 [2024-12-16 05:47:11.313925] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:37.663 [2024-12-16 05:47:11.313928] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:37.663 [2024-12-16 05:47:11.313933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:37.663 [2024-12-16 05:47:11.320853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:37.663 [2024-12-16 05:47:11.320865] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:18:37.664 [2024-12-16 05:47:11.320877] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:18:37.664 [2024-12-16 05:47:11.320883] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:18:37.664 [2024-12-16 05:47:11.320889] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:37.664 [2024-12-16 05:47:11.320893] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:37.664 [2024-12-16 05:47:11.320896] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:37.664 [2024-12-16 05:47:11.320901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:37.664 [2024-12-16 05:47:11.328853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:37.664 [2024-12-16 05:47:11.328866] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:37.664 [2024-12-16 05:47:11.328875] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:37.664 
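The nvme_vfio_ctrlr_get_reg/set_reg debug lines in this bring-up sequence only show raw offsets and values; they correspond to the standard NVMe controller registers (CAP at 0x00, VS at 0x08, CC at 0x14, CSTS at 0x1C, AQA at 0x24, ASQ at 0x28, ACQ at 0x30). A small decoding sketch, independent of the test scripts, for when such log lines are piped in on stdin:

import re, sys

# standard NVMe controller register offsets, enough to read the debug lines above
REGS = {0x00: "CAP", 0x08: "VS", 0x0c: "INTMS", 0x10: "INTMC",
        0x14: "CC", 0x1c: "CSTS", 0x24: "AQA", 0x28: "ASQ", 0x30: "ACQ"}

LINE = re.compile(r"offset (0x[0-9a-fA-F]+), value (0x[0-9a-fA-F]+)")

for text in sys.stdin:
    m = LINE.search(text)
    if m:
        off, val = (int(g, 16) for g in m.groups())
        print(f"{REGS.get(off, hex(off))} = {val:#x}")

Fed the lines above, it would show VS = 0x10300 (NVMe 1.3) and CC moving from 0x0 to 0x460001 when the controller is enabled, consistent with the CC.EN = 1 / CSTS.RDY = 1 transitions logged here.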
[2024-12-16 05:47:11.328881] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:37.664 [2024-12-16 05:47:11.328885] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:37.664 [2024-12-16 05:47:11.328888] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:37.664 [2024-12-16 05:47:11.328893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:37.664 [2024-12-16 05:47:11.336854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:37.664 [2024-12-16 05:47:11.336864] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:37.664 [2024-12-16 05:47:11.336870] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:18:37.664 [2024-12-16 05:47:11.336877] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:18:37.664 [2024-12-16 05:47:11.336883] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:18:37.664 [2024-12-16 05:47:11.336887] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:37.664 [2024-12-16 05:47:11.336892] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:18:37.664 [2024-12-16 05:47:11.336896] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:18:37.664 [2024-12-16 05:47:11.336900] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:18:37.664 [2024-12-16 05:47:11.336904] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:18:37.664 [2024-12-16 05:47:11.336919] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:37.664 [2024-12-16 05:47:11.344854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:37.664 [2024-12-16 05:47:11.344867] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:37.664 [2024-12-16 05:47:11.352854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:37.664 [2024-12-16 05:47:11.352866] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:37.664 [2024-12-16 05:47:11.360853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:37.664 [2024-12-16 05:47:11.360866] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: 
GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:37.664 [2024-12-16 05:47:11.368852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:37.664 [2024-12-16 05:47:11.368869] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:37.664 [2024-12-16 05:47:11.368874] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:37.664 [2024-12-16 05:47:11.368877] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:37.664 [2024-12-16 05:47:11.368882] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:37.664 [2024-12-16 05:47:11.368885] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:37.664 [2024-12-16 05:47:11.368891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:37.664 [2024-12-16 05:47:11.368897] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:37.664 [2024-12-16 05:47:11.368901] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:37.664 [2024-12-16 05:47:11.368903] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:37.664 [2024-12-16 05:47:11.368908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:37.664 [2024-12-16 05:47:11.368915] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:37.664 [2024-12-16 05:47:11.368918] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:37.664 [2024-12-16 05:47:11.368921] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:37.664 [2024-12-16 05:47:11.368927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:37.664 [2024-12-16 05:47:11.368933] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:37.664 [2024-12-16 05:47:11.368936] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:37.664 [2024-12-16 05:47:11.368939] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:37.664 [2024-12-16 05:47:11.368945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:37.664 [2024-12-16 05:47:11.376853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:37.664 [2024-12-16 05:47:11.376867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:37.664 [2024-12-16 05:47:11.376876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:37.664 [2024-12-16 05:47:11.376882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:37.664 ===================================================== 00:18:37.664 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:37.664 ===================================================== 00:18:37.664 Controller Capabilities/Features 00:18:37.664 ================================ 00:18:37.664 Vendor ID: 4e58 00:18:37.664 Subsystem Vendor ID: 4e58 00:18:37.664 Serial Number: SPDK2 00:18:37.664 Model Number: SPDK bdev Controller 00:18:37.664 Firmware Version: 24.09.1 00:18:37.664 Recommended Arb Burst: 6 00:18:37.664 IEEE OUI Identifier: 8d 6b 50 00:18:37.664 Multi-path I/O 00:18:37.664 May have multiple subsystem ports: Yes 00:18:37.664 May have multiple controllers: Yes 00:18:37.664 Associated with SR-IOV VF: No 00:18:37.664 Max Data Transfer Size: 131072 00:18:37.664 Max Number of Namespaces: 32 00:18:37.664 Max Number of I/O Queues: 127 00:18:37.664 NVMe Specification Version (VS): 1.3 00:18:37.664 NVMe Specification Version (Identify): 1.3 00:18:37.664 Maximum Queue Entries: 256 00:18:37.664 Contiguous Queues Required: Yes 00:18:37.664 Arbitration Mechanisms Supported 00:18:37.664 Weighted Round Robin: Not Supported 00:18:37.664 Vendor Specific: Not Supported 00:18:37.664 Reset Timeout: 15000 ms 00:18:37.664 Doorbell Stride: 4 bytes 00:18:37.664 NVM Subsystem Reset: Not Supported 00:18:37.664 Command Sets Supported 00:18:37.664 NVM Command Set: Supported 00:18:37.664 Boot Partition: Not Supported 00:18:37.664 Memory Page Size Minimum: 4096 bytes 00:18:37.664 Memory Page Size Maximum: 4096 bytes 00:18:37.664 Persistent Memory Region: Not Supported 00:18:37.664 Optional Asynchronous Events Supported 00:18:37.664 Namespace Attribute Notices: Supported 00:18:37.664 Firmware Activation Notices: Not Supported 00:18:37.664 ANA Change Notices: Not Supported 00:18:37.664 PLE Aggregate Log Change Notices: Not Supported 00:18:37.664 LBA Status Info Alert Notices: Not Supported 00:18:37.664 EGE Aggregate Log Change Notices: Not Supported 00:18:37.664 Normal NVM Subsystem Shutdown event: Not Supported 00:18:37.664 Zone Descriptor Change Notices: Not Supported 00:18:37.664 Discovery Log Change Notices: Not Supported 00:18:37.664 Controller Attributes 00:18:37.664 128-bit Host Identifier: Supported 00:18:37.664 Non-Operational Permissive Mode: Not Supported 00:18:37.664 NVM Sets: Not Supported 00:18:37.664 Read Recovery Levels: Not Supported 00:18:37.664 Endurance Groups: Not Supported 00:18:37.664 Predictable Latency Mode: Not Supported 00:18:37.664 Traffic Based Keep ALive: Not Supported 00:18:37.664 Namespace Granularity: Not Supported 00:18:37.664 SQ Associations: Not Supported 00:18:37.664 UUID List: Not Supported 00:18:37.664 Multi-Domain Subsystem: Not Supported 00:18:37.664 Fixed Capacity Management: Not Supported 00:18:37.665 Variable Capacity Management: Not Supported 00:18:37.665 Delete Endurance Group: Not Supported 00:18:37.665 Delete NVM Set: Not Supported 00:18:37.665 Extended LBA Formats Supported: Not Supported 00:18:37.665 Flexible Data Placement Supported: Not Supported 00:18:37.665 00:18:37.665 Controller Memory Buffer Support 00:18:37.665 ================================ 00:18:37.665 Supported: No 00:18:37.665 00:18:37.665 Persistent Memory Region Support 00:18:37.665 ================================ 00:18:37.665 Supported: No 00:18:37.665 00:18:37.665 Admin Command Set Attributes 00:18:37.665 ============================ 00:18:37.665 Security Send/Receive: Not Supported 
00:18:37.665 Format NVM: Not Supported 00:18:37.665 Firmware Activate/Download: Not Supported 00:18:37.665 Namespace Management: Not Supported 00:18:37.665 Device Self-Test: Not Supported 00:18:37.665 Directives: Not Supported 00:18:37.665 NVMe-MI: Not Supported 00:18:37.665 Virtualization Management: Not Supported 00:18:37.665 Doorbell Buffer Config: Not Supported 00:18:37.665 Get LBA Status Capability: Not Supported 00:18:37.665 Command & Feature Lockdown Capability: Not Supported 00:18:37.665 Abort Command Limit: 4 00:18:37.665 Async Event Request Limit: 4 00:18:37.665 Number of Firmware Slots: N/A 00:18:37.665 Firmware Slot 1 Read-Only: N/A 00:18:37.665 Firmware Activation Without Reset: N/A 00:18:37.665 Multiple Update Detection Support: N/A 00:18:37.665 Firmware Update Granularity: No Information Provided 00:18:37.665 Per-Namespace SMART Log: No 00:18:37.665 Asymmetric Namespace Access Log Page: Not Supported 00:18:37.665 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:37.665 Command Effects Log Page: Supported 00:18:37.665 Get Log Page Extended Data: Supported 00:18:37.665 Telemetry Log Pages: Not Supported 00:18:37.665 Persistent Event Log Pages: Not Supported 00:18:37.665 Supported Log Pages Log Page: May Support 00:18:37.665 Commands Supported & Effects Log Page: Not Supported 00:18:37.665 Feature Identifiers & Effects Log Page:May Support 00:18:37.665 NVMe-MI Commands & Effects Log Page: May Support 00:18:37.665 Data Area 4 for Telemetry Log: Not Supported 00:18:37.665 Error Log Page Entries Supported: 128 00:18:37.665 Keep Alive: Supported 00:18:37.665 Keep Alive Granularity: 10000 ms 00:18:37.665 00:18:37.665 NVM Command Set Attributes 00:18:37.665 ========================== 00:18:37.665 Submission Queue Entry Size 00:18:37.665 Max: 64 00:18:37.665 Min: 64 00:18:37.665 Completion Queue Entry Size 00:18:37.665 Max: 16 00:18:37.665 Min: 16 00:18:37.665 Number of Namespaces: 32 00:18:37.665 Compare Command: Supported 00:18:37.665 Write Uncorrectable Command: Not Supported 00:18:37.665 Dataset Management Command: Supported 00:18:37.665 Write Zeroes Command: Supported 00:18:37.665 Set Features Save Field: Not Supported 00:18:37.665 Reservations: Not Supported 00:18:37.665 Timestamp: Not Supported 00:18:37.665 Copy: Supported 00:18:37.665 Volatile Write Cache: Present 00:18:37.665 Atomic Write Unit (Normal): 1 00:18:37.665 Atomic Write Unit (PFail): 1 00:18:37.665 Atomic Compare & Write Unit: 1 00:18:37.665 Fused Compare & Write: Supported 00:18:37.665 Scatter-Gather List 00:18:37.665 SGL Command Set: Supported (Dword aligned) 00:18:37.665 SGL Keyed: Not Supported 00:18:37.665 SGL Bit Bucket Descriptor: Not Supported 00:18:37.665 SGL Metadata Pointer: Not Supported 00:18:37.665 Oversized SGL: Not Supported 00:18:37.665 SGL Metadata Address: Not Supported 00:18:37.665 SGL Offset: Not Supported 00:18:37.665 Transport SGL Data Block: Not Supported 00:18:37.665 Replay Protected Memory Block: Not Supported 00:18:37.665 00:18:37.665 Firmware Slot Information 00:18:37.665 ========================= 00:18:37.665 Active slot: 1 00:18:37.665 Slot 1 Firmware Revision: 24.09.1 00:18:37.665 00:18:37.665 00:18:37.665 Commands Supported and Effects 00:18:37.665 ============================== 00:18:37.665 Admin Commands 00:18:37.665 -------------- 00:18:37.665 Get Log Page (02h): Supported 00:18:37.665 Identify (06h): Supported 00:18:37.665 Abort (08h): Supported 00:18:37.665 Set Features (09h): Supported 00:18:37.665 Get Features (0Ah): Supported 00:18:37.665 Asynchronous Event Request (0Ch): 
Supported 00:18:37.665 Keep Alive (18h): Supported 00:18:37.665 I/O Commands 00:18:37.665 ------------ 00:18:37.665 Flush (00h): Supported LBA-Change 00:18:37.665 Write (01h): Supported LBA-Change 00:18:37.665 Read (02h): Supported 00:18:37.665 Compare (05h): Supported 00:18:37.665 Write Zeroes (08h): Supported LBA-Change 00:18:37.665 Dataset Management (09h): Supported LBA-Change 00:18:37.665 Copy (19h): Supported LBA-Change 00:18:37.665 00:18:37.665 Error Log 00:18:37.665 ========= 00:18:37.665 00:18:37.665 Arbitration 00:18:37.665 =========== 00:18:37.665 Arbitration Burst: 1 00:18:37.665 00:18:37.665 Power Management 00:18:37.665 ================ 00:18:37.665 Number of Power States: 1 00:18:37.665 Current Power State: Power State #0 00:18:37.665 Power State #0: 00:18:37.665 Max Power: 0.00 W 00:18:37.665 Non-Operational State: Operational 00:18:37.665 Entry Latency: Not Reported 00:18:37.665 Exit Latency: Not Reported 00:18:37.665 Relative Read Throughput: 0 00:18:37.665 Relative Read Latency: 0 00:18:37.665 Relative Write Throughput: 0 00:18:37.665 Relative Write Latency: 0 00:18:37.665 Idle Power: Not Reported 00:18:37.665 Active Power: Not Reported 00:18:37.665 Non-Operational Permissive Mode: Not Supported 00:18:37.665 00:18:37.665 Health Information 00:18:37.665 ================== 00:18:37.665 Critical Warnings: 00:18:37.665 Available Spare Space: OK 00:18:37.665 Temperature: OK 00:18:37.665 Device Reliability: OK 00:18:37.665 Read Only: No 00:18:37.665 Volatile Memory Backup: OK 00:18:37.665 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:37.665 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:37.665 Available Spare: 0% 00:18:37.665 Availabl[2024-12-16 05:47:11.376964] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:37.665 [2024-12-16 05:47:11.384853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:37.665 [2024-12-16 05:47:11.384882] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:18:37.665 [2024-12-16 05:47:11.384890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.665 [2024-12-16 05:47:11.384896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.665 [2024-12-16 05:47:11.384902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.665 [2024-12-16 05:47:11.384907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.665 [2024-12-16 05:47:11.384956] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:37.665 [2024-12-16 05:47:11.384966] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:37.665 [2024-12-16 05:47:11.385958] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:37.665 [2024-12-16 05:47:11.386004] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:18:37.665 [2024-12-16 05:47:11.386011] 
nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:18:37.665 [2024-12-16 05:47:11.386962] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:37.665 [2024-12-16 05:47:11.386973] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:18:37.665 [2024-12-16 05:47:11.387026] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:37.665 [2024-12-16 05:47:11.387978] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:37.665 e Spare Threshold: 0% 00:18:37.665 Life Percentage Used: 0% 00:18:37.665 Data Units Read: 0 00:18:37.665 Data Units Written: 0 00:18:37.665 Host Read Commands: 0 00:18:37.665 Host Write Commands: 0 00:18:37.665 Controller Busy Time: 0 minutes 00:18:37.665 Power Cycles: 0 00:18:37.665 Power On Hours: 0 hours 00:18:37.665 Unsafe Shutdowns: 0 00:18:37.665 Unrecoverable Media Errors: 0 00:18:37.665 Lifetime Error Log Entries: 0 00:18:37.665 Warning Temperature Time: 0 minutes 00:18:37.665 Critical Temperature Time: 0 minutes 00:18:37.665 00:18:37.665 Number of Queues 00:18:37.665 ================ 00:18:37.665 Number of I/O Submission Queues: 127 00:18:37.665 Number of I/O Completion Queues: 127 00:18:37.665 00:18:37.665 Active Namespaces 00:18:37.665 ================= 00:18:37.665 Namespace ID:1 00:18:37.665 Error Recovery Timeout: Unlimited 00:18:37.665 Command Set Identifier: NVM (00h) 00:18:37.665 Deallocate: Supported 00:18:37.665 Deallocated/Unwritten Error: Not Supported 00:18:37.665 Deallocated Read Value: Unknown 00:18:37.665 Deallocate in Write Zeroes: Not Supported 00:18:37.665 Deallocated Guard Field: 0xFFFF 00:18:37.666 Flush: Supported 00:18:37.666 Reservation: Supported 00:18:37.666 Namespace Sharing Capabilities: Multiple Controllers 00:18:37.666 Size (in LBAs): 131072 (0GiB) 00:18:37.666 Capacity (in LBAs): 131072 (0GiB) 00:18:37.666 Utilization (in LBAs): 131072 (0GiB) 00:18:37.666 NGUID: DFE8F40283794E3BA3938FF85F679CF8 00:18:37.666 UUID: dfe8f402-8379-4e3b-a393-8ff85f679cf8 00:18:37.666 Thin Provisioning: Not Supported 00:18:37.666 Per-NS Atomic Units: Yes 00:18:37.666 Atomic Boundary Size (Normal): 0 00:18:37.666 Atomic Boundary Size (PFail): 0 00:18:37.666 Atomic Boundary Offset: 0 00:18:37.666 Maximum Single Source Range Length: 65535 00:18:37.666 Maximum Copy Length: 65535 00:18:37.666 Maximum Source Range Count: 1 00:18:37.666 NGUID/EUI64 Never Reused: No 00:18:37.666 Namespace Write Protected: No 00:18:37.666 Number of LBA Formats: 1 00:18:37.666 Current LBA Format: LBA Format #00 00:18:37.666 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:37.666 00:18:37.666 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:37.922 [2024-12-16 05:47:11.588997] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:43.304 Initializing NVMe Controllers 00:18:43.305 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: 
nqn.2019-07.io.spdk:cnode2 00:18:43.305 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:43.305 Initialization complete. Launching workers. 00:18:43.305 ======================================================== 00:18:43.305 Latency(us) 00:18:43.305 Device Information : IOPS MiB/s Average min max 00:18:43.305 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39996.14 156.23 3200.14 942.18 8327.09 00:18:43.305 ======================================================== 00:18:43.305 Total : 39996.14 156.23 3200.14 942.18 8327.09 00:18:43.305 00:18:43.305 [2024-12-16 05:47:16.701105] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:43.305 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:43.305 [2024-12-16 05:47:16.919758] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:48.563 Initializing NVMe Controllers 00:18:48.563 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:48.563 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:48.563 Initialization complete. Launching workers. 00:18:48.563 ======================================================== 00:18:48.563 Latency(us) 00:18:48.563 Device Information : IOPS MiB/s Average min max 00:18:48.563 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39933.00 155.99 3207.56 962.66 6687.68 00:18:48.563 ======================================================== 00:18:48.563 Total : 39933.00 155.99 3207.56 962.66 6687.68 00:18:48.563 00:18:48.563 [2024-12-16 05:47:21.939585] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:48.563 05:47:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:48.563 [2024-12-16 05:47:22.139250] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:53.822 [2024-12-16 05:47:27.274935] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:53.822 Initializing NVMe Controllers 00:18:53.822 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:53.822 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:53.822 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:53.822 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:53.822 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:53.822 Initialization complete. Launching workers. 
00:18:53.822 Starting thread on core 2 00:18:53.822 Starting thread on core 3 00:18:53.822 Starting thread on core 1 00:18:53.822 05:47:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:53.822 [2024-12-16 05:47:27.557263] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:57.102 [2024-12-16 05:47:30.620613] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:57.102 Initializing NVMe Controllers 00:18:57.102 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:57.102 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:57.102 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:57.102 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:57.102 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:57.102 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:57.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:57.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:57.102 Initialization complete. Launching workers. 00:18:57.102 Starting thread on core 1 with urgent priority queue 00:18:57.102 Starting thread on core 2 with urgent priority queue 00:18:57.102 Starting thread on core 3 with urgent priority queue 00:18:57.102 Starting thread on core 0 with urgent priority queue 00:18:57.102 SPDK bdev Controller (SPDK2 ) core 0: 9659.00 IO/s 10.35 secs/100000 ios 00:18:57.102 SPDK bdev Controller (SPDK2 ) core 1: 9691.67 IO/s 10.32 secs/100000 ios 00:18:57.102 SPDK bdev Controller (SPDK2 ) core 2: 7798.33 IO/s 12.82 secs/100000 ios 00:18:57.102 SPDK bdev Controller (SPDK2 ) core 3: 10984.33 IO/s 9.10 secs/100000 ios 00:18:57.102 ======================================================== 00:18:57.102 00:18:57.102 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:57.102 [2024-12-16 05:47:30.890396] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:57.102 Initializing NVMe Controllers 00:18:57.102 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:57.102 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:57.102 Namespace ID: 1 size: 0GB 00:18:57.102 Initialization complete. 00:18:57.102 INFO: using host memory buffer for IO 00:18:57.102 Hello world! 
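(Editor's cross-check, not part of the captured test output.) The two spdk_nvme_perf runs above were launched with queue depth -q 128, and by Little's law the reported IOPS multiplied by the average latency should come out close to that number of outstanding I/Os. A one-line awk sketch using the figures printed above:

    awk 'BEGIN { printf "read:  %.1f in-flight\n", 39996.14 * 3200.14 / 1e6 }'   # ~128.0, matches -q 128
    awk 'BEGIN { printf "write: %.1f in-flight\n", 39933.00 * 3207.56 / 1e6 }'   # ~128.1, matches -q 128

The same arithmetic holds for the arbitration run above: for core 0, 100000 ios / 9659.00 IO/s is roughly 10.35 secs, which is the value reported.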
00:18:57.102 [2024-12-16 05:47:30.898450] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:57.102 05:47:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:57.359 [2024-12-16 05:47:31.159589] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:58.730 Initializing NVMe Controllers 00:18:58.730 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:58.730 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:58.730 Initialization complete. Launching workers. 00:18:58.730 submit (in ns) avg, min, max = 6038.5, 3138.1, 3999249.5 00:18:58.730 complete (in ns) avg, min, max = 21683.4, 1707.6, 4995288.6 00:18:58.730 00:18:58.730 Submit histogram 00:18:58.730 ================ 00:18:58.730 Range in us Cumulative Count 00:18:58.730 3.124 - 3.139: 0.0059% ( 1) 00:18:58.730 3.139 - 3.154: 0.0236% ( 3) 00:18:58.730 3.154 - 3.170: 0.0827% ( 10) 00:18:58.730 3.170 - 3.185: 0.1240% ( 7) 00:18:58.730 3.185 - 3.200: 0.3840% ( 44) 00:18:58.730 3.200 - 3.215: 1.6776% ( 219) 00:18:58.730 3.215 - 3.230: 5.2513% ( 605) 00:18:58.730 3.230 - 3.246: 10.6267% ( 910) 00:18:58.730 3.246 - 3.261: 16.5633% ( 1005) 00:18:58.730 3.261 - 3.276: 24.2779% ( 1306) 00:18:58.730 3.276 - 3.291: 31.9570% ( 1300) 00:18:58.730 3.291 - 3.307: 38.4370% ( 1097) 00:18:58.730 3.307 - 3.322: 43.8124% ( 910) 00:18:58.730 3.322 - 3.337: 48.5616% ( 804) 00:18:58.730 3.337 - 3.352: 52.5843% ( 681) 00:18:58.730 3.352 - 3.368: 56.6720% ( 692) 00:18:58.730 3.368 - 3.383: 62.9216% ( 1058) 00:18:58.730 3.383 - 3.398: 69.1772% ( 1059) 00:18:58.730 3.398 - 3.413: 74.6352% ( 924) 00:18:58.730 3.413 - 3.429: 79.5440% ( 831) 00:18:58.730 3.429 - 3.444: 83.1532% ( 611) 00:18:58.730 3.444 - 3.459: 85.7818% ( 445) 00:18:58.730 3.459 - 3.474: 87.0105% ( 208) 00:18:58.730 3.474 - 3.490: 87.6425% ( 107) 00:18:58.730 3.490 - 3.505: 88.0619% ( 71) 00:18:58.730 3.505 - 3.520: 88.5640% ( 85) 00:18:58.730 3.520 - 3.535: 89.2256% ( 112) 00:18:58.730 3.535 - 3.550: 90.0467% ( 139) 00:18:58.730 3.550 - 3.566: 91.0863% ( 176) 00:18:58.730 3.566 - 3.581: 92.0610% ( 165) 00:18:58.730 3.581 - 3.596: 92.9057% ( 143) 00:18:58.730 3.596 - 3.611: 93.7622% ( 145) 00:18:58.730 3.611 - 3.627: 94.7014% ( 159) 00:18:58.730 3.627 - 3.642: 95.6170% ( 155) 00:18:58.730 3.642 - 3.657: 96.4144% ( 135) 00:18:58.730 3.657 - 3.672: 97.1764% ( 129) 00:18:58.730 3.672 - 3.688: 97.7494% ( 97) 00:18:58.730 3.688 - 3.703: 98.2161% ( 79) 00:18:58.730 3.703 - 3.718: 98.5823% ( 62) 00:18:58.730 3.718 - 3.733: 98.9249% ( 58) 00:18:58.730 3.733 - 3.749: 99.2321% ( 52) 00:18:58.730 3.749 - 3.764: 99.3502% ( 20) 00:18:58.730 3.764 - 3.779: 99.4625% ( 19) 00:18:58.730 3.779 - 3.794: 99.5570% ( 16) 00:18:58.730 3.794 - 3.810: 99.5747% ( 3) 00:18:58.730 3.810 - 3.825: 99.6042% ( 5) 00:18:58.730 3.825 - 3.840: 99.6279% ( 4) 00:18:58.730 3.840 - 3.855: 99.6397% ( 2) 00:18:58.730 4.084 - 4.114: 99.6456% ( 1) 00:18:58.730 4.206 - 4.236: 99.6515% ( 1) 00:18:58.730 5.029 - 5.059: 99.6574% ( 1) 00:18:58.730 5.090 - 5.120: 99.6633% ( 1) 00:18:58.730 5.120 - 5.150: 99.6692% ( 1) 00:18:58.730 5.394 - 5.425: 99.6751% ( 1) 00:18:58.730 5.425 - 5.455: 99.6810% ( 1) 00:18:58.730 5.669 - 5.699: 99.6928% ( 2) 00:18:58.730 5.821 - 5.851: 99.6987% ( 1) 
00:18:58.730 5.851 - 5.882: 99.7046% ( 1) 00:18:58.730 5.882 - 5.912: 99.7106% ( 1) 00:18:58.730 5.912 - 5.943: 99.7165% ( 1) 00:18:58.730 5.943 - 5.973: 99.7224% ( 1) 00:18:58.730 5.973 - 6.004: 99.7283% ( 1) 00:18:58.730 6.004 - 6.034: 99.7342% ( 1) 00:18:58.730 6.034 - 6.065: 99.7519% ( 3) 00:18:58.730 6.126 - 6.156: 99.7696% ( 3) 00:18:58.730 6.217 - 6.248: 99.7755% ( 1) 00:18:58.730 6.370 - 6.400: 99.7873% ( 2) 00:18:58.730 6.461 - 6.491: 99.7933% ( 1) 00:18:58.730 6.522 - 6.552: 99.8051% ( 2) 00:18:58.730 6.674 - 6.705: 99.8110% ( 1) 00:18:58.730 6.705 - 6.735: 99.8228% ( 2) 00:18:58.730 6.827 - 6.857: 99.8287% ( 1) 00:18:58.730 6.888 - 6.918: 99.8346% ( 1) 00:18:58.730 7.162 - 7.192: 99.8405% ( 1) 00:18:58.730 7.253 - 7.284: 99.8523% ( 2) 00:18:58.730 7.345 - 7.375: 99.8700% ( 3) 00:18:58.730 7.467 - 7.497: 99.8760% ( 1) 00:18:58.730 7.497 - 7.528: 99.8819% ( 1) 00:18:58.730 7.558 - 7.589: 99.8878% ( 1) 00:18:58.730 [2024-12-16 05:47:32.254099] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:58.730 7.589 - 7.619: 99.8937% ( 1) 00:18:58.730 7.650 - 7.680: 99.9055% ( 2) 00:18:58.730 8.350 - 8.411: 99.9114% ( 1) 00:18:58.730 8.472 - 8.533: 99.9173% ( 1) 00:18:58.730 8.594 - 8.655: 99.9232% ( 1) 00:18:58.730 9.021 - 9.082: 99.9291% ( 1) 00:18:58.730 2012.891 - 2028.495: 99.9350% ( 1) 00:18:58.730 3167.573 - 3183.177: 99.9409% ( 1) 00:18:58.730 3994.575 - 4025.783: 100.0000% ( 10) 00:18:58.730 00:18:58.730 Complete histogram 00:18:58.730 ================== 00:18:58.730 Range in us Cumulative Count 00:18:58.730 1.707 - 1.714: 0.0945% ( 16) 00:18:58.730 1.714 - 1.722: 0.2422% ( 25) 00:18:58.730 1.722 - 1.730: 0.3426% ( 17) 00:18:58.730 1.730 - 1.737: 0.3721% ( 5) 00:18:58.731 1.737 - 1.745: 0.4017% ( 5) 00:18:58.731 1.745 - 1.752: 0.5730% ( 29) 00:18:58.731 1.752 - 1.760: 3.8514% ( 555) 00:18:58.731 1.760 - 1.768: 17.4907% ( 2309) 00:18:58.731 1.768 - 1.775: 32.9671% ( 2620) 00:18:58.731 1.775 - 1.783: 39.5239% ( 1110) 00:18:58.731 1.783 - 1.790: 41.7154% ( 371) 00:18:58.731 1.790 - 1.798: 43.9246% ( 374) 00:18:58.731 1.798 - 1.806: 52.8560% ( 1512) 00:18:58.731 1.806 - 1.813: 72.3197% ( 3295) 00:18:58.731 1.813 - 1.821: 88.0206% ( 2658) 00:18:58.731 1.821 - 1.829: 93.6677% ( 956) 00:18:58.731 1.829 - 1.836: 95.8651% ( 372) 00:18:58.731 1.836 - 1.844: 97.4482% ( 268) 00:18:58.731 1.844 - 1.851: 98.2574% ( 137) 00:18:58.731 1.851 - 1.859: 98.6650% ( 69) 00:18:58.731 1.859 - 1.867: 98.8481% ( 31) 00:18:58.731 1.867 - 1.874: 98.9604% ( 19) 00:18:58.731 1.874 - 1.882: 99.0312% ( 12) 00:18:58.731 1.882 - 1.890: 99.1317% ( 17) 00:18:58.731 1.890 - 1.897: 99.1789% ( 8) 00:18:58.731 1.897 - 1.905: 99.1966% ( 3) 00:18:58.731 1.905 - 1.912: 99.2262% ( 5) 00:18:58.731 1.912 - 1.920: 99.2616% ( 6) 00:18:58.731 1.920 - 1.928: 99.2734% ( 2) 00:18:58.731 1.950 - 1.966: 99.2793% ( 1) 00:18:58.731 1.966 - 1.981: 99.2912% ( 2) 00:18:58.731 1.981 - 1.996: 99.3030% ( 2) 00:18:58.731 1.996 - 2.011: 99.3089% ( 1) 00:18:58.731 2.210 - 2.225: 99.3148% ( 1) 00:18:58.731 2.316 - 2.331: 99.3207% ( 1) 00:18:58.731 2.331 - 2.347: 99.3325% ( 2) 00:18:58.731 3.352 - 3.368: 99.3384% ( 1) 00:18:58.731 3.383 - 3.398: 99.3443% ( 1) 00:18:58.731 3.520 - 3.535: 99.3502% ( 1) 00:18:58.731 3.749 - 3.764: 99.3561% ( 1) 00:18:58.731 4.053 - 4.084: 99.3679% ( 2) 00:18:58.731 4.358 - 4.389: 99.3739% ( 1) 00:18:58.731 4.450 - 4.480: 99.3798% ( 1) 00:18:58.731 4.663 - 4.693: 99.3857% ( 1) 00:18:58.731 4.724 - 4.754: 99.3916% ( 1) 00:18:58.731 4.815 - 4.846: 99.3975% ( 1) 
00:18:58.731 4.876 - 4.907: 99.4034% ( 1) 00:18:58.731 4.998 - 5.029: 99.4093% ( 1) 00:18:58.731 5.333 - 5.364: 99.4270% ( 3) 00:18:58.731 5.455 - 5.486: 99.4329% ( 1) 00:18:58.731 5.608 - 5.638: 99.4388% ( 1) 00:18:58.731 5.882 - 5.912: 99.4447% ( 1) 00:18:58.731 6.034 - 6.065: 99.4506% ( 1) 00:18:58.731 6.065 - 6.095: 99.4566% ( 1) 00:18:58.731 6.217 - 6.248: 99.4625% ( 1) 00:18:58.731 6.430 - 6.461: 99.4684% ( 1) 00:18:58.731 7.162 - 7.192: 99.4743% ( 1) 00:18:58.731 8.350 - 8.411: 99.4802% ( 1) 00:18:58.731 8.777 - 8.838: 99.4861% ( 1) 00:18:58.731 13.653 - 13.714: 99.4920% ( 1) 00:18:58.731 14.629 - 14.690: 99.4979% ( 1) 00:18:58.731 38.766 - 39.010: 99.5038% ( 1) 00:18:58.731 3011.535 - 3027.139: 99.5097% ( 1) 00:18:58.731 3978.971 - 3994.575: 99.5156% ( 1) 00:18:58.731 3994.575 - 4025.783: 99.9882% ( 80) 00:18:58.731 4962.011 - 4993.219: 99.9941% ( 1) 00:18:58.731 4993.219 - 5024.427: 100.0000% ( 1) 00:18:58.731 00:18:58.731 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:58.731 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:58.731 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:58.731 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:58.731 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:58.731 [ 00:18:58.731 { 00:18:58.731 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:58.731 "subtype": "Discovery", 00:18:58.731 "listen_addresses": [], 00:18:58.731 "allow_any_host": true, 00:18:58.731 "hosts": [] 00:18:58.731 }, 00:18:58.731 { 00:18:58.731 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:58.731 "subtype": "NVMe", 00:18:58.731 "listen_addresses": [ 00:18:58.731 { 00:18:58.731 "trtype": "VFIOUSER", 00:18:58.731 "adrfam": "IPv4", 00:18:58.731 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:58.731 "trsvcid": "0" 00:18:58.731 } 00:18:58.731 ], 00:18:58.731 "allow_any_host": true, 00:18:58.731 "hosts": [], 00:18:58.731 "serial_number": "SPDK1", 00:18:58.731 "model_number": "SPDK bdev Controller", 00:18:58.731 "max_namespaces": 32, 00:18:58.731 "min_cntlid": 1, 00:18:58.731 "max_cntlid": 65519, 00:18:58.731 "namespaces": [ 00:18:58.731 { 00:18:58.731 "nsid": 1, 00:18:58.731 "bdev_name": "Malloc1", 00:18:58.731 "name": "Malloc1", 00:18:58.731 "nguid": "620D6963E5D449B78FA91A3F64BD26E8", 00:18:58.731 "uuid": "620d6963-e5d4-49b7-8fa9-1a3f64bd26e8" 00:18:58.731 }, 00:18:58.731 { 00:18:58.731 "nsid": 2, 00:18:58.731 "bdev_name": "Malloc3", 00:18:58.731 "name": "Malloc3", 00:18:58.731 "nguid": "8F60D37914A442FDBF910BCB248BDF7D", 00:18:58.731 "uuid": "8f60d379-14a4-42fd-bf91-0bcb248bdf7d" 00:18:58.731 } 00:18:58.731 ] 00:18:58.731 }, 00:18:58.731 { 00:18:58.731 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:58.731 "subtype": "NVMe", 00:18:58.731 "listen_addresses": [ 00:18:58.731 { 00:18:58.731 "trtype": "VFIOUSER", 00:18:58.731 "adrfam": "IPv4", 00:18:58.731 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:58.731 "trsvcid": "0" 00:18:58.731 } 00:18:58.731 ], 00:18:58.731 "allow_any_host": true, 00:18:58.731 "hosts": [], 00:18:58.731 "serial_number": "SPDK2", 00:18:58.731 
"model_number": "SPDK bdev Controller", 00:18:58.731 "max_namespaces": 32, 00:18:58.731 "min_cntlid": 1, 00:18:58.731 "max_cntlid": 65519, 00:18:58.731 "namespaces": [ 00:18:58.731 { 00:18:58.731 "nsid": 1, 00:18:58.731 "bdev_name": "Malloc2", 00:18:58.731 "name": "Malloc2", 00:18:58.731 "nguid": "DFE8F40283794E3BA3938FF85F679CF8", 00:18:58.731 "uuid": "dfe8f402-8379-4e3b-a393-8ff85f679cf8" 00:18:58.731 } 00:18:58.731 ] 00:18:58.731 } 00:18:58.731 ] 00:18:58.731 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:58.731 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:58.731 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3347463 00:18:58.731 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:58.731 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:58.731 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:58.731 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:58.731 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:58.731 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:58.731 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:58.989 [2024-12-16 05:47:32.618320] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:58.989 Malloc4 00:18:58.989 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:59.246 [2024-12-16 05:47:32.883329] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:59.246 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:59.246 Asynchronous Event Request test 00:18:59.246 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:59.246 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:59.246 Registering asynchronous event callbacks... 00:18:59.246 Starting namespace attribute notice tests for all controllers... 00:18:59.246 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:59.246 aer_cb - Changed Namespace 00:18:59.246 Cleaning up... 
00:18:59.246 [ 00:18:59.246 { 00:18:59.246 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:59.246 "subtype": "Discovery", 00:18:59.246 "listen_addresses": [], 00:18:59.246 "allow_any_host": true, 00:18:59.246 "hosts": [] 00:18:59.246 }, 00:18:59.246 { 00:18:59.246 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:59.246 "subtype": "NVMe", 00:18:59.246 "listen_addresses": [ 00:18:59.246 { 00:18:59.246 "trtype": "VFIOUSER", 00:18:59.246 "adrfam": "IPv4", 00:18:59.246 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:59.246 "trsvcid": "0" 00:18:59.246 } 00:18:59.246 ], 00:18:59.246 "allow_any_host": true, 00:18:59.246 "hosts": [], 00:18:59.246 "serial_number": "SPDK1", 00:18:59.246 "model_number": "SPDK bdev Controller", 00:18:59.246 "max_namespaces": 32, 00:18:59.246 "min_cntlid": 1, 00:18:59.246 "max_cntlid": 65519, 00:18:59.246 "namespaces": [ 00:18:59.246 { 00:18:59.246 "nsid": 1, 00:18:59.246 "bdev_name": "Malloc1", 00:18:59.246 "name": "Malloc1", 00:18:59.246 "nguid": "620D6963E5D449B78FA91A3F64BD26E8", 00:18:59.246 "uuid": "620d6963-e5d4-49b7-8fa9-1a3f64bd26e8" 00:18:59.246 }, 00:18:59.246 { 00:18:59.246 "nsid": 2, 00:18:59.246 "bdev_name": "Malloc3", 00:18:59.246 "name": "Malloc3", 00:18:59.246 "nguid": "8F60D37914A442FDBF910BCB248BDF7D", 00:18:59.246 "uuid": "8f60d379-14a4-42fd-bf91-0bcb248bdf7d" 00:18:59.246 } 00:18:59.246 ] 00:18:59.246 }, 00:18:59.246 { 00:18:59.246 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:59.246 "subtype": "NVMe", 00:18:59.246 "listen_addresses": [ 00:18:59.246 { 00:18:59.246 "trtype": "VFIOUSER", 00:18:59.246 "adrfam": "IPv4", 00:18:59.246 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:59.246 "trsvcid": "0" 00:18:59.246 } 00:18:59.246 ], 00:18:59.246 "allow_any_host": true, 00:18:59.246 "hosts": [], 00:18:59.246 "serial_number": "SPDK2", 00:18:59.246 "model_number": "SPDK bdev Controller", 00:18:59.246 "max_namespaces": 32, 00:18:59.246 "min_cntlid": 1, 00:18:59.246 "max_cntlid": 65519, 00:18:59.246 "namespaces": [ 00:18:59.246 { 00:18:59.246 "nsid": 1, 00:18:59.246 "bdev_name": "Malloc2", 00:18:59.246 "name": "Malloc2", 00:18:59.246 "nguid": "DFE8F40283794E3BA3938FF85F679CF8", 00:18:59.246 "uuid": "dfe8f402-8379-4e3b-a393-8ff85f679cf8" 00:18:59.246 }, 00:18:59.246 { 00:18:59.246 "nsid": 2, 00:18:59.246 "bdev_name": "Malloc4", 00:18:59.246 "name": "Malloc4", 00:18:59.247 "nguid": "039707885A534AA0A254CA409173BEB7", 00:18:59.247 "uuid": "03970788-5a53-4aa0-a254-ca409173beb7" 00:18:59.247 } 00:18:59.247 ] 00:18:59.247 } 00:18:59.247 ] 00:18:59.505 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3347463 00:18:59.505 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:59.505 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3339954 00:18:59.505 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3339954 ']' 00:18:59.505 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3339954 00:18:59.505 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:59.505 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:59.505 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3339954 00:18:59.505 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:59.505 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:59.505 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3339954' 00:18:59.505 killing process with pid 3339954 00:18:59.505 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3339954 00:18:59.505 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3339954 00:18:59.763 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:59.763 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:59.763 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:59.763 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:59.763 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:59.763 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3347594 00:18:59.763 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3347594' 00:18:59.763 Process pid: 3347594 00:18:59.763 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:59.763 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:59.763 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3347594 00:18:59.763 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3347594 ']' 00:18:59.763 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.763 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:59.763 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.763 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:59.763 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:59.763 [2024-12-16 05:47:33.448952] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:59.763 [2024-12-16 05:47:33.449889] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:18:59.763 [2024-12-16 05:47:33.449927] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.764 [2024-12-16 05:47:33.507646] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:59.764 [2024-12-16 05:47:33.546747] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.764 [2024-12-16 05:47:33.546788] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.764 [2024-12-16 05:47:33.546796] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.764 [2024-12-16 05:47:33.546803] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.764 [2024-12-16 05:47:33.546807] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:59.764 [2024-12-16 05:47:33.546858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.764 [2024-12-16 05:47:33.546925] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.764 [2024-12-16 05:47:33.547013] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:59.764 [2024-12-16 05:47:33.547014] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.022 [2024-12-16 05:47:33.621640] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:00.022 [2024-12-16 05:47:33.621771] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:00.022 [2024-12-16 05:47:33.621993] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:19:00.022 [2024-12-16 05:47:33.622347] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:00.022 [2024-12-16 05:47:33.622611] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
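(Editor's note, not part of the captured output.) The interrupt-mode re-run that follows repeats the same setup as the earlier vfio-user run, this time against an nvmf_tgt started with --interrupt-mode and a VFIOUSER transport created with -M -I. A condensed sketch of the per-device loop traced below at @64 through @74 (rpc.py path, directories, bdev and subsystem names are the ones from this log; $rpc is shorthand introduced here):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER -M -I
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done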
00:19:00.022 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:00.022 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:19:00.022 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:00.958 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:01.216 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:01.216 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:01.216 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:01.216 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:01.216 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:01.216 Malloc1 00:19:01.216 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:01.474 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:01.731 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:01.988 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:01.988 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:01.988 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:01.988 Malloc2 00:19:01.988 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:02.245 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:02.502 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:02.760 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:02.760 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3347594 00:19:02.760 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 3347594 ']' 00:19:02.760 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3347594 00:19:02.760 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:19:02.760 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:02.760 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3347594 00:19:02.760 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:02.760 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:02.760 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3347594' 00:19:02.760 killing process with pid 3347594 00:19:02.760 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3347594 00:19:02.760 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3347594 00:19:03.018 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:03.018 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:03.018 00:19:03.018 real 0m50.552s 00:19:03.018 user 3m15.740s 00:19:03.018 sys 0m3.133s 00:19:03.018 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:03.018 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:03.018 ************************************ 00:19:03.018 END TEST nvmf_vfio_user 00:19:03.018 ************************************ 00:19:03.018 05:47:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:03.018 05:47:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:03.018 05:47:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:03.018 05:47:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:03.018 ************************************ 00:19:03.018 START TEST nvmf_vfio_user_nvme_compliance 00:19:03.018 ************************************ 00:19:03.018 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:03.018 * Looking for test storage... 
00:19:03.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:03.018 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:03.018 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:19:03.018 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:03.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.277 --rc genhtml_branch_coverage=1 00:19:03.277 --rc genhtml_function_coverage=1 00:19:03.277 --rc genhtml_legend=1 00:19:03.277 --rc geninfo_all_blocks=1 00:19:03.277 --rc geninfo_unexecuted_blocks=1 00:19:03.277 00:19:03.277 ' 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:03.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.277 --rc genhtml_branch_coverage=1 00:19:03.277 --rc genhtml_function_coverage=1 00:19:03.277 --rc genhtml_legend=1 00:19:03.277 --rc geninfo_all_blocks=1 00:19:03.277 --rc geninfo_unexecuted_blocks=1 00:19:03.277 00:19:03.277 ' 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:03.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.277 --rc genhtml_branch_coverage=1 00:19:03.277 --rc genhtml_function_coverage=1 00:19:03.277 --rc genhtml_legend=1 00:19:03.277 --rc geninfo_all_blocks=1 00:19:03.277 --rc geninfo_unexecuted_blocks=1 00:19:03.277 00:19:03.277 ' 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:03.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.277 --rc genhtml_branch_coverage=1 00:19:03.277 --rc genhtml_function_coverage=1 00:19:03.277 --rc genhtml_legend=1 00:19:03.277 --rc geninfo_all_blocks=1 00:19:03.277 --rc 
geninfo_unexecuted_blocks=1 00:19:03.277 00:19:03.277 ' 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.277 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:03.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3348224 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3348224' 00:19:03.278 Process pid: 3348224 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3348224 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 3348224 ']' 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:03.278 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:03.278 [2024-12-16 05:47:36.992406] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
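The trace above shows compliance.sh starting the target app (nvmf_tgt -i 0 -e 0xFFFF -m 0x7: shared-memory id 0, all tracepoint groups, a three-core mask), recording its pid, installing a cleanup trap, and then waiting for the RPC socket at /var/tmp/spdk.sock. A minimal standalone sketch of that launch-and-wait pattern follows; waitforlisten and killprocess are autotest harness helpers, so polling scripts/rpc.py spdk_get_version is used here only as an assumed stand-in for waitforlisten.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start the NVMe-oF target the same way compliance.sh does above.
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!

# Tear the target down on interrupt or exit (stand-in for the harness trap/killprocess).
trap 'kill -9 $nvmfpid 2>/dev/null; exit 1' SIGINT SIGTERM EXIT

# Stand-in for waitforlisten: poll the default RPC socket until the app responds.
until "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done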
00:19:03.278 [2024-12-16 05:47:36.992452] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.278 [2024-12-16 05:47:37.046620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:03.278 [2024-12-16 05:47:37.086990] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.278 [2024-12-16 05:47:37.087026] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:03.278 [2024-12-16 05:47:37.087033] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.278 [2024-12-16 05:47:37.087039] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.278 [2024-12-16 05:47:37.087045] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:03.278 [2024-12-16 05:47:37.087085] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.278 [2024-12-16 05:47:37.087184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:03.278 [2024-12-16 05:47:37.087186] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.536 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:03.536 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:19:03.536 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:04.469 malloc0 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:04.469 05:47:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.469 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:04.726 00:19:04.726 00:19:04.726 CUnit - A unit testing framework for C - Version 2.1-3 00:19:04.726 http://cunit.sourceforge.net/ 00:19:04.726 00:19:04.726 00:19:04.726 Suite: nvme_compliance 00:19:04.726 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-16 05:47:38.388999] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.726 [2024-12-16 05:47:38.390330] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:04.726 [2024-12-16 05:47:38.390347] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:04.726 [2024-12-16 05:47:38.390352] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:04.726 [2024-12-16 05:47:38.392017] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:04.726 passed 00:19:04.726 Test: admin_identify_ctrlr_verify_fused ...[2024-12-16 05:47:38.470581] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.726 [2024-12-16 05:47:38.473608] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:04.726 passed 00:19:04.726 Test: admin_identify_ns ...[2024-12-16 05:47:38.553171] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.983 [2024-12-16 05:47:38.612858] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:04.984 [2024-12-16 05:47:38.620857] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:04.984 [2024-12-16 05:47:38.641944] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:19:04.984 passed 00:19:04.984 Test: admin_get_features_mandatory_features ...[2024-12-16 05:47:38.717345] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.984 [2024-12-16 05:47:38.720366] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:04.984 passed 00:19:04.984 Test: admin_get_features_optional_features ...[2024-12-16 05:47:38.796889] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.984 [2024-12-16 05:47:38.800919] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:04.984 passed 00:19:05.241 Test: admin_set_features_number_of_queues ...[2024-12-16 05:47:38.876174] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.241 [2024-12-16 05:47:38.983949] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.241 passed 00:19:05.241 Test: admin_get_log_page_mandatory_logs ...[2024-12-16 05:47:39.055502] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.241 [2024-12-16 05:47:39.058526] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.241 passed 00:19:05.498 Test: admin_get_log_page_with_lpo ...[2024-12-16 05:47:39.136160] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.498 [2024-12-16 05:47:39.204861] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:05.498 [2024-12-16 05:47:39.217907] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.498 passed 00:19:05.498 Test: fabric_property_get ...[2024-12-16 05:47:39.290401] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.498 [2024-12-16 05:47:39.291632] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:05.498 [2024-12-16 05:47:39.293430] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.498 passed 00:19:05.761 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-16 05:47:39.370911] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.761 [2024-12-16 05:47:39.372135] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:05.761 [2024-12-16 05:47:39.373931] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.761 passed 00:19:05.761 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-16 05:47:39.449006] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.761 [2024-12-16 05:47:39.532856] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:05.761 [2024-12-16 05:47:39.548851] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:05.761 [2024-12-16 05:47:39.556952] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.761 passed 00:19:06.018 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-16 05:47:39.628452] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:06.018 [2024-12-16 05:47:39.629697] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:06.018 [2024-12-16 05:47:39.631477] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:19:06.018 passed 00:19:06.018 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-16 05:47:39.708101] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:06.018 [2024-12-16 05:47:39.787858] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:06.018 [2024-12-16 05:47:39.811863] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:06.018 [2024-12-16 05:47:39.816928] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:06.018 passed 00:19:06.275 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-16 05:47:39.890415] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:06.275 [2024-12-16 05:47:39.891640] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:06.275 [2024-12-16 05:47:39.891664] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:06.275 [2024-12-16 05:47:39.893437] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:06.275 passed 00:19:06.275 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-16 05:47:39.971024] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:06.275 [2024-12-16 05:47:40.063860] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:06.275 [2024-12-16 05:47:40.071852] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:06.275 [2024-12-16 05:47:40.079856] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:06.275 [2024-12-16 05:47:40.087858] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:06.275 [2024-12-16 05:47:40.117025] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:06.533 passed 00:19:06.533 Test: admin_create_io_sq_verify_pc ...[2024-12-16 05:47:40.194054] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:06.533 [2024-12-16 05:47:40.210864] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:06.533 [2024-12-16 05:47:40.228245] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:06.533 passed 00:19:06.533 Test: admin_create_io_qp_max_qps ...[2024-12-16 05:47:40.303747] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:07.905 [2024-12-16 05:47:41.396855] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:19:08.163 [2024-12-16 05:47:41.789614] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:08.163 passed 00:19:08.163 Test: admin_create_io_sq_shared_cq ...[2024-12-16 05:47:41.865446] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:08.164 [2024-12-16 05:47:41.997851] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:08.421 [2024-12-16 05:47:42.034923] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:08.421 passed 00:19:08.421 00:19:08.421 Run Summary: Type Total Ran Passed Failed Inactive 00:19:08.421 suites 1 1 n/a 0 0 00:19:08.421 tests 18 18 18 0 0 00:19:08.421 asserts 360 
360 360 0 n/a 00:19:08.421 00:19:08.421 Elapsed time = 1.498 seconds 00:19:08.421 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3348224 00:19:08.422 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 3348224 ']' 00:19:08.422 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 3348224 00:19:08.422 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:19:08.422 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:08.422 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3348224 00:19:08.422 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:08.422 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:08.422 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3348224' 00:19:08.422 killing process with pid 3348224 00:19:08.422 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 3348224 00:19:08.422 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 3348224 00:19:08.679 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:08.679 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:08.679 00:19:08.679 real 0m5.558s 00:19:08.679 user 0m15.587s 00:19:08.679 sys 0m0.505s 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:08.680 ************************************ 00:19:08.680 END TEST nvmf_vfio_user_nvme_compliance 00:19:08.680 ************************************ 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:08.680 ************************************ 00:19:08.680 START TEST nvmf_vfio_user_fuzz 00:19:08.680 ************************************ 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:08.680 * Looking for test storage... 
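The compliance run above provisioned a vfio-user controller and then drove all 18 CUnit cases (360 asserts, roughly 1.5 seconds) through a single invocation of the nvme_compliance app. A minimal sketch of that invocation, assuming a target is already serving nqn.2021-09.io.spdk:cnode0 at /var/run/vfio-user as in the setup above; the -r argument carries the transport ID string of the controller under test.

# Drive the CUnit compliance suite against the vfio-user controller created above.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'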
00:19:08.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:08.680 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:08.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.939 --rc genhtml_branch_coverage=1 00:19:08.939 --rc genhtml_function_coverage=1 00:19:08.939 --rc genhtml_legend=1 00:19:08.939 --rc geninfo_all_blocks=1 00:19:08.939 --rc geninfo_unexecuted_blocks=1 00:19:08.939 00:19:08.939 ' 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:08.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.939 --rc genhtml_branch_coverage=1 00:19:08.939 --rc genhtml_function_coverage=1 00:19:08.939 --rc genhtml_legend=1 00:19:08.939 --rc geninfo_all_blocks=1 00:19:08.939 --rc geninfo_unexecuted_blocks=1 00:19:08.939 00:19:08.939 ' 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:08.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.939 --rc genhtml_branch_coverage=1 00:19:08.939 --rc genhtml_function_coverage=1 00:19:08.939 --rc genhtml_legend=1 00:19:08.939 --rc geninfo_all_blocks=1 00:19:08.939 --rc geninfo_unexecuted_blocks=1 00:19:08.939 00:19:08.939 ' 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:08.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.939 --rc genhtml_branch_coverage=1 00:19:08.939 --rc genhtml_function_coverage=1 00:19:08.939 --rc genhtml_legend=1 00:19:08.939 --rc geninfo_all_blocks=1 00:19:08.939 --rc geninfo_unexecuted_blocks=1 00:19:08.939 00:19:08.939 ' 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:08.939 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:08.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3349202 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3349202' 00:19:08.940 Process pid: 3349202 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3349202 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 3349202 ']' 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
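As with the compliance run, the fuzz script points the VFIOUSER transport at a filesystem path rather than an IP address: traddr=/var/run/vfio-user names the directory used for the vfio-user socket, and the script wipes it up front so the listener starts from a clean state. A small sketch of that convention, with the directory name taken from the trace above:

# For trtype:VFIOUSER the listener address is a directory, not an IP:port pair.
traddr=/var/run/vfio-user
rm -rf "$traddr"      # start from a clean socket directory, as vfio_user_fuzz.sh does
mkdir -p "$traddr"    # recreated before nvmf_subsystem_add_listener is called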
00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:08.940 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:09.198 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:09.198 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:19:09.198 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:10.132 malloc0 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
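The rpc_cmd calls above (a harness wrapper around SPDK's RPC client) provision the fuzz target the same way the compliance test did: a VFIOUSER transport, a 64 MB malloc bdev with 512-byte blocks, a subsystem that allows any host, that bdev as its namespace, and a vfio-user listener. A minimal equivalent using scripts/rpc.py against the default RPC socket is assumed below.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t VFIOUSER                              # enable the vfio-user transport
$RPC bdev_malloc_create 64 512 -b malloc0                           # 64 MB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk    # subsystem, any host allowed, serial "spdk"
$RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0       # expose malloc0 as a namespace
$RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0                          # listen on the vfio-user directory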
00:19:10.132 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:42.189 Fuzzing completed. Shutting down the fuzz application 00:19:42.189 00:19:42.189 Dumping successful admin opcodes: 00:19:42.189 8, 9, 10, 24, 00:19:42.189 Dumping successful io opcodes: 00:19:42.189 0, 00:19:42.189 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1151613, total successful commands: 4532, random_seed: 338161536 00:19:42.189 NS: 0x200003a1ef00 admin qp, Total commands completed: 285641, total successful commands: 2302, random_seed: 3988799168 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3349202 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3349202 ']' 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 3349202 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3349202 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3349202' 00:19:42.189 killing process with pid 3349202 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 3349202 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 3349202 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:42.189 00:19:42.189 real 0m32.160s 00:19:42.189 user 0m34.098s 00:19:42.189 sys 0m26.330s 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:42.189 
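The fuzz pass above is a single nvme_fuzz invocation followed by RPC teardown; its summary reports roughly 1.15 M I/O commands and 286 k admin commands completed against the vfio-user controller. A sketch of the same invocation and cleanup follows; the flag readings are assumptions based on the run above (-t 30 matching the ~32 s wall time, -S a fixed seed for repeatability, -F the transport ID of the controller under test), while -N and -a are copied through without interpretation.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# 30-second fuzz run against the vfio-user controller, seeded for repeatability.
"$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

# Teardown mirrors the trace: delete the subsystem over RPC, then stop the target.
"$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
kill -9 "$nvmfpid" 2>/dev/null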
************************************ 00:19:42.189 END TEST nvmf_vfio_user_fuzz 00:19:42.189 ************************************ 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:42.189 ************************************ 00:19:42.189 START TEST nvmf_auth_target 00:19:42.189 ************************************ 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:42.189 * Looking for test storage... 00:19:42.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:42.189 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:42.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.190 --rc genhtml_branch_coverage=1 00:19:42.190 --rc genhtml_function_coverage=1 00:19:42.190 --rc genhtml_legend=1 00:19:42.190 --rc geninfo_all_blocks=1 00:19:42.190 --rc geninfo_unexecuted_blocks=1 00:19:42.190 00:19:42.190 ' 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:42.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.190 --rc genhtml_branch_coverage=1 00:19:42.190 --rc genhtml_function_coverage=1 00:19:42.190 --rc genhtml_legend=1 00:19:42.190 --rc geninfo_all_blocks=1 00:19:42.190 --rc geninfo_unexecuted_blocks=1 00:19:42.190 00:19:42.190 ' 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:42.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.190 --rc genhtml_branch_coverage=1 00:19:42.190 --rc genhtml_function_coverage=1 00:19:42.190 --rc genhtml_legend=1 00:19:42.190 --rc geninfo_all_blocks=1 00:19:42.190 --rc geninfo_unexecuted_blocks=1 00:19:42.190 00:19:42.190 ' 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:42.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.190 --rc genhtml_branch_coverage=1 00:19:42.190 --rc genhtml_function_coverage=1 00:19:42.190 --rc genhtml_legend=1 00:19:42.190 --rc geninfo_all_blocks=1 00:19:42.190 --rc geninfo_unexecuted_blocks=1 00:19:42.190 00:19:42.190 ' 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:42.190 05:48:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:42.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:42.190 05:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.377 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:46.377 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:46.377 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:46.377 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:46.377 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:46.377 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:46.377 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:46.377 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:46.377 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:46.378 
05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:46.378 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:46.378 05:48:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:46.378 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:46.378 Found net devices under 0000:af:00.0: cvl_0_0 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:46.378 Found net devices under 0000:af:00.1: cvl_0_1 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # is_hw=yes 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:46.378 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:46.636 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:46.636 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:46.636 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:46.636 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:46.636 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:46.636 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:46.636 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:46.636 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:46.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:46.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:19:46.636 00:19:46.636 --- 10.0.0.2 ping statistics --- 00:19:46.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.636 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:19:46.636 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:46.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:46.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:19:46.636 00:19:46.636 --- 10.0.0.1 ping statistics --- 00:19:46.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.636 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:19:46.636 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.636 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # return 0 00:19:46.636 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:46.636 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.636 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:46.636 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:46.636 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.637 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:46.637 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:46.637 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:46.637 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:46.637 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:46.637 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.637 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=3357927 00:19:46.637 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 3357927 00:19:46.637 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:46.637 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3357927 ']' 00:19:46.637 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.637 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:46.637 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
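Annotation: the TCP transport in this run goes between two back-to-back E810 ports rather than over loopback. The commands traced just above move cvl_0_0 into a private namespace (cvl_0_0_ns_spdk) and address it as 10.0.0.2 for the target, leave cvl_0_1 in the root namespace as 10.0.0.1 for the initiator, open TCP port 4420 in iptables, and then launch nvmf_tgt through ip netns exec with -L nvmf_auth so the authentication path is traced. Condensed from the trace (interface names and addresses as in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                    # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1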
00:19:46.637 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:46.637 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3358047 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=c89513b8a34a797ed287489aeb1930770006da073e6bc368 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.A01 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key c89513b8a34a797ed287489aeb1930770006da073e6bc368 0 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 c89513b8a34a797ed287489aeb1930770006da073e6bc368 0 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=c89513b8a34a797ed287489aeb1930770006da073e6bc368 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 
00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.A01 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.A01 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.A01 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=fc9271fe8ea6c0d9e9b13d40b6dcd8dceb00b9fc7bc3e673ad38cb8ccad25698 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.eIC 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key fc9271fe8ea6c0d9e9b13d40b6dcd8dceb00b9fc7bc3e673ad38cb8ccad25698 3 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 fc9271fe8ea6c0d9e9b13d40b6dcd8dceb00b9fc7bc3e673ad38cb8ccad25698 3 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=fc9271fe8ea6c0d9e9b13d40b6dcd8dceb00b9fc7bc3e673ad38cb8ccad25698 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:19:46.895 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.eIC 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.eIC 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.eIC 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 
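Annotation: each loop index ends up with a pair of files: keys[i] is the secret the host presents during DH-HMAC-CHAP, and ckeys[i], when set, is the controller-side secret that makes the authentication bidirectional. The two gen_dhchap_key arguments select the hash id in the DHHC-1 prefix and the length of the hex secret, so the invocations traced around here amount to:

    gen_dhchap_key null   48   # keys[0]:  hash id 00, 48-char secret
    gen_dhchap_key sha512 64   # ckeys[0]: hash id 03, 64-char secret (controller key -> bidirectional auth)
    gen_dhchap_key sha256 32   # keys[1]:  hash id 01, 32-char secret
    gen_dhchap_key sha384 48   # ckeys[1]: hash id 02, 48-char secret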
00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=20b64399b7bcde61f7f5dc9801fc8133 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.vc9 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 20b64399b7bcde61f7f5dc9801fc8133 1 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 20b64399b7bcde61f7f5dc9801fc8133 1 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=20b64399b7bcde61f7f5dc9801fc8133 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.vc9 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.vc9 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.vc9 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=4c510721c273b37bb86781140b2d5fef36ad6f3cff6d5cd8 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.MFx 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 4c510721c273b37bb86781140b2d5fef36ad6f3cff6d5cd8 2 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 4c510721c273b37bb86781140b2d5fef36ad6f3cff6d5cd8 2 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:47.154 05:48:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=4c510721c273b37bb86781140b2d5fef36ad6f3cff6d5cd8 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.MFx 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.MFx 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.MFx 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=909f64378c6101590feb9760a8d599ad681c44567f6b8311 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.5NZ 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 909f64378c6101590feb9760a8d599ad681c44567f6b8311 2 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 909f64378c6101590feb9760a8d599ad681c44567f6b8311 2 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=909f64378c6101590feb9760a8d599ad681c44567f6b8311 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.5NZ 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.5NZ 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.5NZ 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 
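Annotation: once generated, every key file is registered twice further down in the trace: once on the target's default RPC socket (rpc_cmd, /var/tmp/spdk.sock) and once on the host application's socket (hostrpc, /var/tmp/host.sock), under matching keyring names key0..key3 and ckey0..ckey2, so both sides can refer to the same material by name. The pattern, condensed from the trace below with the long rpc.py paths shortened:

    # target side (rpc_cmd talks to /var/tmp/spdk.sock by default)
    scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.A01
    scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eIC
    # host side (hostrpc adds -s /var/tmp/host.sock)
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.A01
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eIC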
00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=7415b14b71b7d9d6f4914ae43078f050 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.gcd 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 7415b14b71b7d9d6f4914ae43078f050 1 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 7415b14b71b7d9d6f4914ae43078f050 1 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=7415b14b71b7d9d6f4914ae43078f050 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:19:47.154 05:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.gcd 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.gcd 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.gcd 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=4d432a12301c691eea56d8abc0e6349fc862eb3ee2f0fd39767c4d859d55639b 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.uoQ 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # 
format_dhchap_key 4d432a12301c691eea56d8abc0e6349fc862eb3ee2f0fd39767c4d859d55639b 3 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 4d432a12301c691eea56d8abc0e6349fc862eb3ee2f0fd39767c4d859d55639b 3 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=4d432a12301c691eea56d8abc0e6349fc862eb3ee2f0fd39767c4d859d55639b 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.uoQ 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.uoQ 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.uoQ 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3357927 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3357927 ']' 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:47.413 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.671 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:47.671 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:47.671 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3358047 /var/tmp/host.sock 00:19:47.671 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3358047 ']' 00:19:47.671 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:19:47.671 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:47.671 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:47.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
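Annotation: with both applications listening, each digest/dhgroup/key combination runs the same connect_authenticate sequence: restrict the host bdev layer to a single digest and DH group, allow the host NQN on cnode0 with that key (plus the controller key when one exists; key 3 has no ckey, so that pass exercises host-only authentication), attach a controller, check via nvmf_subsystem_get_qpairs that the qpair reports digest, dhgroup and auth state "completed", then repeat the connection with the kernel initiator using the same secrets and tear everything down. Condensed from the first iteration traced below (sha256 digest, null group, key 0), with rpc.py paths shortened and the long DHHC-1 secrets elided:

    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key key0 --dhchap-ctrlr-key ckey0
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
            -q "$hostnqn" --hostid "$hostid" \
            --dhchap-secret 'DHHC-1:00:...:' --dhchap-ctrl-secret 'DHHC-1:03:...:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"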
00:19:47.671 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:47.671 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.671 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:47.671 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:47.671 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:47.671 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.671 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.671 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.671 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:47.671 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.A01 00:19:47.671 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.671 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.671 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.671 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.A01 00:19:47.672 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.A01 00:19:47.930 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.eIC ]] 00:19:47.930 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eIC 00:19:47.930 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.930 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.930 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.930 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eIC 00:19:47.930 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eIC 00:19:48.188 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:48.188 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.vc9 00:19:48.188 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.188 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.188 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.188 05:48:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.vc9 00:19:48.188 05:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.vc9 00:19:48.446 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.MFx ]] 00:19:48.446 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MFx 00:19:48.446 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.446 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.446 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.446 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MFx 00:19:48.446 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MFx 00:19:48.446 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:48.446 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5NZ 00:19:48.446 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.446 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.446 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.446 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.5NZ 00:19:48.446 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.5NZ 00:19:48.704 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.gcd ]] 00:19:48.704 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gcd 00:19:48.704 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.704 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.704 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.704 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gcd 00:19:48.704 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gcd 00:19:48.962 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:48.962 05:48:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.uoQ 00:19:48.962 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.962 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.962 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.962 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.uoQ 00:19:48.963 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.uoQ 00:19:49.221 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:49.221 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:49.221 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.221 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.221 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:49.221 05:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:49.221 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:49.221 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.221 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:49.221 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:49.221 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:49.221 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.221 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.221 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.221 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.221 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.221 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.221 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.221 
05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.479 00:19:49.479 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.479 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.479 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.737 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.737 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.737 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.737 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.737 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.737 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.737 { 00:19:49.737 "cntlid": 1, 00:19:49.737 "qid": 0, 00:19:49.737 "state": "enabled", 00:19:49.737 "thread": "nvmf_tgt_poll_group_000", 00:19:49.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:49.737 "listen_address": { 00:19:49.737 "trtype": "TCP", 00:19:49.737 "adrfam": "IPv4", 00:19:49.737 "traddr": "10.0.0.2", 00:19:49.737 "trsvcid": "4420" 00:19:49.737 }, 00:19:49.737 "peer_address": { 00:19:49.737 "trtype": "TCP", 00:19:49.737 "adrfam": "IPv4", 00:19:49.737 "traddr": "10.0.0.1", 00:19:49.737 "trsvcid": "36128" 00:19:49.737 }, 00:19:49.737 "auth": { 00:19:49.737 "state": "completed", 00:19:49.737 "digest": "sha256", 00:19:49.737 "dhgroup": "null" 00:19:49.737 } 00:19:49.737 } 00:19:49.737 ]' 00:19:49.737 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.737 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.737 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.738 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:49.738 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.995 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.995 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.995 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.995 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:19:49.995 05:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:19:50.561 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.561 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:50.561 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.561 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.561 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.561 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.561 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:50.561 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:50.819 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:50.819 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.819 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:50.819 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:50.819 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:50.819 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.819 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.819 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.819 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.819 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.819 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.819 05:48:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.819 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.078 00:19:51.078 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.078 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.078 05:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.336 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.336 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.336 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.336 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.336 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.336 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.336 { 00:19:51.336 "cntlid": 3, 00:19:51.336 "qid": 0, 00:19:51.336 "state": "enabled", 00:19:51.336 "thread": "nvmf_tgt_poll_group_000", 00:19:51.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:51.336 "listen_address": { 00:19:51.336 "trtype": "TCP", 00:19:51.336 "adrfam": "IPv4", 00:19:51.336 "traddr": "10.0.0.2", 00:19:51.336 "trsvcid": "4420" 00:19:51.336 }, 00:19:51.336 "peer_address": { 00:19:51.336 "trtype": "TCP", 00:19:51.336 "adrfam": "IPv4", 00:19:51.336 "traddr": "10.0.0.1", 00:19:51.336 "trsvcid": "36150" 00:19:51.336 }, 00:19:51.336 "auth": { 00:19:51.336 "state": "completed", 00:19:51.336 "digest": "sha256", 00:19:51.336 "dhgroup": "null" 00:19:51.336 } 00:19:51.336 } 00:19:51.336 ]' 00:19:51.336 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.336 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.336 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.336 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:51.336 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.336 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.336 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.336 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.593 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:19:51.593 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:19:52.158 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.158 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:52.158 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.158 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.158 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.158 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.158 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:52.158 05:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:52.416 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:52.416 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.416 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:52.416 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:52.416 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:52.416 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.416 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.416 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.416 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.416 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.416 05:48:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.416 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.416 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.674 00:19:52.674 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.674 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.674 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.932 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.932 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.932 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.932 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.932 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.932 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.932 { 00:19:52.932 "cntlid": 5, 00:19:52.932 "qid": 0, 00:19:52.932 "state": "enabled", 00:19:52.932 "thread": "nvmf_tgt_poll_group_000", 00:19:52.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:52.932 "listen_address": { 00:19:52.932 "trtype": "TCP", 00:19:52.932 "adrfam": "IPv4", 00:19:52.932 "traddr": "10.0.0.2", 00:19:52.932 "trsvcid": "4420" 00:19:52.932 }, 00:19:52.932 "peer_address": { 00:19:52.932 "trtype": "TCP", 00:19:52.932 "adrfam": "IPv4", 00:19:52.932 "traddr": "10.0.0.1", 00:19:52.932 "trsvcid": "36184" 00:19:52.932 }, 00:19:52.932 "auth": { 00:19:52.932 "state": "completed", 00:19:52.932 "digest": "sha256", 00:19:52.932 "dhgroup": "null" 00:19:52.932 } 00:19:52.932 } 00:19:52.932 ]' 00:19:52.932 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.933 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.933 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.933 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:52.933 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.933 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.933 05:48:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.933 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.191 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:19:53.191 05:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:19:53.757 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.757 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:53.757 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.757 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.757 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.757 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.757 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:53.757 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:54.015 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:54.015 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.015 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:54.015 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:54.015 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:54.015 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.015 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:54.015 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.015 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:54.015 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.015 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:54.015 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:54.015 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:54.273 00:19:54.273 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.273 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.273 05:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.273 05:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.273 05:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.273 05:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.273 05:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.273 05:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.531 05:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.531 { 00:19:54.531 "cntlid": 7, 00:19:54.531 "qid": 0, 00:19:54.531 "state": "enabled", 00:19:54.531 "thread": "nvmf_tgt_poll_group_000", 00:19:54.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:54.531 "listen_address": { 00:19:54.531 "trtype": "TCP", 00:19:54.531 "adrfam": "IPv4", 00:19:54.531 "traddr": "10.0.0.2", 00:19:54.531 "trsvcid": "4420" 00:19:54.531 }, 00:19:54.531 "peer_address": { 00:19:54.531 "trtype": "TCP", 00:19:54.531 "adrfam": "IPv4", 00:19:54.531 "traddr": "10.0.0.1", 00:19:54.531 "trsvcid": "36202" 00:19:54.531 }, 00:19:54.531 "auth": { 00:19:54.531 "state": "completed", 00:19:54.531 "digest": "sha256", 00:19:54.531 "dhgroup": "null" 00:19:54.531 } 00:19:54.531 } 00:19:54.531 ]' 00:19:54.531 05:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.531 05:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.531 05:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.531 05:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:54.531 05:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.531 05:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.531 05:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.531 05:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.789 05:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:19:54.789 05:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:19:55.354 05:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.355 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:55.355 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.355 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.355 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.355 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.355 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.355 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.355 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.612 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:55.612 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.612 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.612 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:55.612 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:55.612 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.612 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.612 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.612 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.612 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.612 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.612 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.613 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.613 00:19:55.870 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.870 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.870 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.870 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.870 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.870 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.870 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.870 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.870 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.870 { 00:19:55.870 "cntlid": 9, 00:19:55.870 "qid": 0, 00:19:55.870 "state": "enabled", 00:19:55.870 "thread": "nvmf_tgt_poll_group_000", 00:19:55.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:55.870 "listen_address": { 00:19:55.870 "trtype": "TCP", 00:19:55.870 "adrfam": "IPv4", 00:19:55.870 "traddr": "10.0.0.2", 00:19:55.870 "trsvcid": "4420" 00:19:55.870 }, 00:19:55.870 "peer_address": { 00:19:55.870 "trtype": "TCP", 00:19:55.870 "adrfam": "IPv4", 00:19:55.870 "traddr": "10.0.0.1", 00:19:55.870 "trsvcid": "36220" 00:19:55.870 }, 00:19:55.870 "auth": { 00:19:55.870 "state": "completed", 00:19:55.870 "digest": "sha256", 00:19:55.870 "dhgroup": "ffdhe2048" 00:19:55.870 } 00:19:55.870 } 00:19:55.870 ]' 00:19:55.870 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.128 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.128 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.128 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:56.128 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.128 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.128 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.128 05:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.386 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:19:56.386 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:19:56.955 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.955 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:56.955 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.955 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.955 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.955 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.955 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:56.955 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:56.955 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:56.955 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.955 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:56.955 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:56.955 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:56.955 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.955 05:48:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.955 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.955 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.955 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.955 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.955 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.955 05:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.245 00:19:57.245 05:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.245 05:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.245 05:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.565 05:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.565 05:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.565 05:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.565 05:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.565 05:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.565 05:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.565 { 00:19:57.565 "cntlid": 11, 00:19:57.565 "qid": 0, 00:19:57.565 "state": "enabled", 00:19:57.565 "thread": "nvmf_tgt_poll_group_000", 00:19:57.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:57.565 "listen_address": { 00:19:57.565 "trtype": "TCP", 00:19:57.565 "adrfam": "IPv4", 00:19:57.565 "traddr": "10.0.0.2", 00:19:57.565 "trsvcid": "4420" 00:19:57.565 }, 00:19:57.565 "peer_address": { 00:19:57.565 "trtype": "TCP", 00:19:57.565 "adrfam": "IPv4", 00:19:57.565 "traddr": "10.0.0.1", 00:19:57.565 "trsvcid": "60950" 00:19:57.565 }, 00:19:57.565 "auth": { 00:19:57.565 "state": "completed", 00:19:57.565 "digest": "sha256", 00:19:57.565 "dhgroup": "ffdhe2048" 00:19:57.565 } 00:19:57.565 } 00:19:57.565 ]' 00:19:57.565 05:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.565 05:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.565 05:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.565 05:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:57.565 05:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.565 05:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.565 05:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.565 05:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.841 05:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:19:57.841 05:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:19:58.406 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.406 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:58.406 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.406 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.406 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.406 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.406 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:58.406 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:58.664 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:58.664 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.664 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.664 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:58.664 05:48:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:58.664 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.664 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.664 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.664 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.664 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.664 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.664 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.664 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.922 00:19:58.922 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.922 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.922 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.181 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.181 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.181 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.181 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.181 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.181 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.181 { 00:19:59.181 "cntlid": 13, 00:19:59.181 "qid": 0, 00:19:59.181 "state": "enabled", 00:19:59.181 "thread": "nvmf_tgt_poll_group_000", 00:19:59.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:59.181 "listen_address": { 00:19:59.181 "trtype": "TCP", 00:19:59.181 "adrfam": "IPv4", 00:19:59.181 "traddr": "10.0.0.2", 00:19:59.181 "trsvcid": "4420" 00:19:59.181 }, 00:19:59.181 "peer_address": { 00:19:59.181 "trtype": "TCP", 00:19:59.181 "adrfam": "IPv4", 00:19:59.181 "traddr": "10.0.0.1", 00:19:59.181 "trsvcid": "60966" 00:19:59.181 }, 00:19:59.181 "auth": { 00:19:59.181 "state": "completed", 00:19:59.181 "digest": 
"sha256", 00:19:59.181 "dhgroup": "ffdhe2048" 00:19:59.181 } 00:19:59.181 } 00:19:59.181 ]' 00:19:59.181 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.181 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.181 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.181 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:59.181 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.181 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.181 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.181 05:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.439 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:19:59.439 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:20:00.005 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.005 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:00.005 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.005 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.005 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.005 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.005 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:00.005 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:00.264 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:00.264 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.264 05:48:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.264 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:00.264 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:00.264 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.264 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:00.264 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.264 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.264 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.264 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:00.264 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:00.264 05:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:00.522 00:20:00.522 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.522 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.522 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.522 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.522 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.522 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.522 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.522 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.522 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.522 { 00:20:00.522 "cntlid": 15, 00:20:00.522 "qid": 0, 00:20:00.523 "state": "enabled", 00:20:00.523 "thread": "nvmf_tgt_poll_group_000", 00:20:00.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:00.523 "listen_address": { 00:20:00.523 "trtype": "TCP", 00:20:00.523 "adrfam": "IPv4", 00:20:00.523 "traddr": "10.0.0.2", 00:20:00.523 "trsvcid": "4420" 00:20:00.523 }, 00:20:00.523 "peer_address": { 00:20:00.523 "trtype": "TCP", 00:20:00.523 "adrfam": "IPv4", 00:20:00.523 "traddr": "10.0.0.1", 00:20:00.523 
"trsvcid": "60980" 00:20:00.523 }, 00:20:00.523 "auth": { 00:20:00.523 "state": "completed", 00:20:00.523 "digest": "sha256", 00:20:00.523 "dhgroup": "ffdhe2048" 00:20:00.523 } 00:20:00.523 } 00:20:00.523 ]' 00:20:00.523 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.781 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.781 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.781 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:00.781 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.781 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.781 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.781 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.040 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:20:01.040 05:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:20:01.607 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.607 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:01.607 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.607 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.607 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.607 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.607 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.607 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:01.607 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:01.607 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:01.607 05:48:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.607 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:01.607 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:01.607 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:01.607 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.607 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.607 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.607 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.865 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.865 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.865 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.865 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.123 00:20:02.123 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.123 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.123 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.124 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.124 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.124 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.124 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.124 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.124 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.124 { 00:20:02.124 "cntlid": 17, 00:20:02.124 "qid": 0, 00:20:02.124 "state": "enabled", 00:20:02.124 "thread": "nvmf_tgt_poll_group_000", 00:20:02.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:02.124 "listen_address": { 00:20:02.124 "trtype": "TCP", 00:20:02.124 "adrfam": "IPv4", 
00:20:02.124 "traddr": "10.0.0.2", 00:20:02.124 "trsvcid": "4420" 00:20:02.124 }, 00:20:02.124 "peer_address": { 00:20:02.124 "trtype": "TCP", 00:20:02.124 "adrfam": "IPv4", 00:20:02.124 "traddr": "10.0.0.1", 00:20:02.124 "trsvcid": "32784" 00:20:02.124 }, 00:20:02.124 "auth": { 00:20:02.124 "state": "completed", 00:20:02.124 "digest": "sha256", 00:20:02.124 "dhgroup": "ffdhe3072" 00:20:02.124 } 00:20:02.124 } 00:20:02.124 ]' 00:20:02.124 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.382 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.382 05:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.382 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:02.382 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.382 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.382 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.382 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.640 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:20:02.640 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:20:03.207 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.207 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:03.207 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.207 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.207 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.207 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.207 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:03.207 05:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:03.207 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:03.207 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.207 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:03.207 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:03.207 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:03.207 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.207 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.207 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.207 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.207 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.207 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.207 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.207 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.466 00:20:03.466 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.466 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.466 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.724 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.724 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.724 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.724 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.724 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.724 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.724 { 
00:20:03.724 "cntlid": 19, 00:20:03.724 "qid": 0, 00:20:03.724 "state": "enabled", 00:20:03.724 "thread": "nvmf_tgt_poll_group_000", 00:20:03.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:03.724 "listen_address": { 00:20:03.724 "trtype": "TCP", 00:20:03.724 "adrfam": "IPv4", 00:20:03.724 "traddr": "10.0.0.2", 00:20:03.724 "trsvcid": "4420" 00:20:03.724 }, 00:20:03.724 "peer_address": { 00:20:03.724 "trtype": "TCP", 00:20:03.724 "adrfam": "IPv4", 00:20:03.724 "traddr": "10.0.0.1", 00:20:03.724 "trsvcid": "32816" 00:20:03.724 }, 00:20:03.724 "auth": { 00:20:03.724 "state": "completed", 00:20:03.724 "digest": "sha256", 00:20:03.724 "dhgroup": "ffdhe3072" 00:20:03.724 } 00:20:03.724 } 00:20:03.724 ]' 00:20:03.724 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.724 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.724 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.983 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:03.983 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.983 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.983 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.983 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.983 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:20:03.983 05:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:20:04.549 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.807 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:04.807 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.807 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.807 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.807 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.808 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.808 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.808 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:04.808 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.808 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:04.808 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:04.808 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:04.808 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.808 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.808 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.808 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.808 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.808 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.808 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.808 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.066 00:20:05.066 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.066 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.066 05:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.324 05:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.324 05:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.324 05:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.324 05:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.324 05:48:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.324 05:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.324 { 00:20:05.324 "cntlid": 21, 00:20:05.324 "qid": 0, 00:20:05.324 "state": "enabled", 00:20:05.324 "thread": "nvmf_tgt_poll_group_000", 00:20:05.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:05.324 "listen_address": { 00:20:05.324 "trtype": "TCP", 00:20:05.325 "adrfam": "IPv4", 00:20:05.325 "traddr": "10.0.0.2", 00:20:05.325 "trsvcid": "4420" 00:20:05.325 }, 00:20:05.325 "peer_address": { 00:20:05.325 "trtype": "TCP", 00:20:05.325 "adrfam": "IPv4", 00:20:05.325 "traddr": "10.0.0.1", 00:20:05.325 "trsvcid": "32828" 00:20:05.325 }, 00:20:05.325 "auth": { 00:20:05.325 "state": "completed", 00:20:05.325 "digest": "sha256", 00:20:05.325 "dhgroup": "ffdhe3072" 00:20:05.325 } 00:20:05.325 } 00:20:05.325 ]' 00:20:05.325 05:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.325 05:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.325 05:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.325 05:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:05.325 05:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.583 05:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.583 05:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.583 05:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.583 05:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:20:05.583 05:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:20:06.150 05:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.150 05:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:06.150 05:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.150 05:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.150 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:20:06.150 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.150 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:06.150 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:06.407 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:06.407 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.407 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:06.407 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:06.407 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:06.407 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.407 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:06.407 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.407 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.407 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.407 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:06.407 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:06.407 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:06.665 00:20:06.665 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.665 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.665 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.922 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.922 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.922 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.922 05:48:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.922 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.922 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.922 { 00:20:06.922 "cntlid": 23, 00:20:06.922 "qid": 0, 00:20:06.922 "state": "enabled", 00:20:06.922 "thread": "nvmf_tgt_poll_group_000", 00:20:06.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:06.922 "listen_address": { 00:20:06.922 "trtype": "TCP", 00:20:06.922 "adrfam": "IPv4", 00:20:06.922 "traddr": "10.0.0.2", 00:20:06.922 "trsvcid": "4420" 00:20:06.922 }, 00:20:06.922 "peer_address": { 00:20:06.922 "trtype": "TCP", 00:20:06.922 "adrfam": "IPv4", 00:20:06.922 "traddr": "10.0.0.1", 00:20:06.922 "trsvcid": "34112" 00:20:06.922 }, 00:20:06.922 "auth": { 00:20:06.922 "state": "completed", 00:20:06.922 "digest": "sha256", 00:20:06.922 "dhgroup": "ffdhe3072" 00:20:06.922 } 00:20:06.922 } 00:20:06.922 ]' 00:20:06.922 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.922 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.922 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.922 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:06.922 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.180 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.180 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.180 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.180 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:20:07.180 05:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:20:07.745 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.745 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:07.745 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.745 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.745 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:07.746 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.746 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.746 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:07.746 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:08.003 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:08.003 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.003 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:08.003 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:08.003 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:08.003 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.003 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.003 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.004 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.004 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.004 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.004 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.004 05:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.261 00:20:08.261 05:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.261 05:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.261 05:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.519 05:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.519 05:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.519 05:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.519 05:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.519 05:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.519 05:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.519 { 00:20:08.519 "cntlid": 25, 00:20:08.519 "qid": 0, 00:20:08.519 "state": "enabled", 00:20:08.519 "thread": "nvmf_tgt_poll_group_000", 00:20:08.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:08.519 "listen_address": { 00:20:08.519 "trtype": "TCP", 00:20:08.519 "adrfam": "IPv4", 00:20:08.519 "traddr": "10.0.0.2", 00:20:08.519 "trsvcid": "4420" 00:20:08.519 }, 00:20:08.519 "peer_address": { 00:20:08.519 "trtype": "TCP", 00:20:08.519 "adrfam": "IPv4", 00:20:08.519 "traddr": "10.0.0.1", 00:20:08.519 "trsvcid": "34128" 00:20:08.519 }, 00:20:08.519 "auth": { 00:20:08.519 "state": "completed", 00:20:08.519 "digest": "sha256", 00:20:08.519 "dhgroup": "ffdhe4096" 00:20:08.519 } 00:20:08.519 } 00:20:08.519 ]' 00:20:08.519 05:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.519 05:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.519 05:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.519 05:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:08.519 05:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.519 05:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.519 05:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.519 05:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.777 05:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:20:08.777 05:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:20:09.342 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.342 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:09.342 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.342 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.342 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.342 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.342 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:09.342 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:09.600 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:09.600 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.600 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:09.600 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:09.600 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:09.600 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.600 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.600 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.600 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.600 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.600 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.600 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.600 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.858 00:20:09.858 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.858 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.858 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.116 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.116 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.116 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.116 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.116 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.116 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.116 { 00:20:10.116 "cntlid": 27, 00:20:10.116 "qid": 0, 00:20:10.116 "state": "enabled", 00:20:10.116 "thread": "nvmf_tgt_poll_group_000", 00:20:10.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:10.116 "listen_address": { 00:20:10.116 "trtype": "TCP", 00:20:10.116 "adrfam": "IPv4", 00:20:10.116 "traddr": "10.0.0.2", 00:20:10.116 "trsvcid": "4420" 00:20:10.116 }, 00:20:10.116 "peer_address": { 00:20:10.116 "trtype": "TCP", 00:20:10.116 "adrfam": "IPv4", 00:20:10.116 "traddr": "10.0.0.1", 00:20:10.116 "trsvcid": "34156" 00:20:10.116 }, 00:20:10.116 "auth": { 00:20:10.116 "state": "completed", 00:20:10.116 "digest": "sha256", 00:20:10.116 "dhgroup": "ffdhe4096" 00:20:10.116 } 00:20:10.116 } 00:20:10.116 ]' 00:20:10.116 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.116 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:10.116 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.116 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:10.116 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.116 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.116 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.116 05:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.374 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:20:10.374 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:20:10.940 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:10.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.940 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:10.940 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.940 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.940 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.940 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.940 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.940 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:11.199 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:11.199 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.199 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:11.199 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:11.199 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:11.199 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.199 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.199 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.199 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.199 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.199 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.199 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.199 05:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.456 00:20:11.456 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:20:11.456 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.456 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.714 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.714 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.714 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.714 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.714 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.714 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.714 { 00:20:11.714 "cntlid": 29, 00:20:11.714 "qid": 0, 00:20:11.714 "state": "enabled", 00:20:11.714 "thread": "nvmf_tgt_poll_group_000", 00:20:11.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:11.714 "listen_address": { 00:20:11.714 "trtype": "TCP", 00:20:11.714 "adrfam": "IPv4", 00:20:11.714 "traddr": "10.0.0.2", 00:20:11.714 "trsvcid": "4420" 00:20:11.714 }, 00:20:11.714 "peer_address": { 00:20:11.714 "trtype": "TCP", 00:20:11.714 "adrfam": "IPv4", 00:20:11.714 "traddr": "10.0.0.1", 00:20:11.714 "trsvcid": "34186" 00:20:11.714 }, 00:20:11.714 "auth": { 00:20:11.714 "state": "completed", 00:20:11.714 "digest": "sha256", 00:20:11.714 "dhgroup": "ffdhe4096" 00:20:11.714 } 00:20:11.714 } 00:20:11.714 ]' 00:20:11.714 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.714 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.714 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.714 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:11.714 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.714 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.714 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.714 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.972 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:20:11.972 05:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: 
--dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:20:12.538 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.538 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:12.538 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.538 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.538 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.538 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.538 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:12.538 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:12.807 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:12.807 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.807 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.807 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:12.807 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:12.807 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.807 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:12.807 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.807 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.807 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.807 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:12.807 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.807 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:13.071 00:20:13.071 05:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.071 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.071 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.329 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.329 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.329 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.329 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.329 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.329 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.329 { 00:20:13.329 "cntlid": 31, 00:20:13.329 "qid": 0, 00:20:13.329 "state": "enabled", 00:20:13.329 "thread": "nvmf_tgt_poll_group_000", 00:20:13.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:13.329 "listen_address": { 00:20:13.329 "trtype": "TCP", 00:20:13.329 "adrfam": "IPv4", 00:20:13.329 "traddr": "10.0.0.2", 00:20:13.329 "trsvcid": "4420" 00:20:13.329 }, 00:20:13.329 "peer_address": { 00:20:13.329 "trtype": "TCP", 00:20:13.329 "adrfam": "IPv4", 00:20:13.329 "traddr": "10.0.0.1", 00:20:13.329 "trsvcid": "34208" 00:20:13.329 }, 00:20:13.329 "auth": { 00:20:13.329 "state": "completed", 00:20:13.329 "digest": "sha256", 00:20:13.329 "dhgroup": "ffdhe4096" 00:20:13.329 } 00:20:13.329 } 00:20:13.329 ]' 00:20:13.329 05:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.329 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.329 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.329 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:13.329 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.329 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.329 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.329 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.587 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:20:13.587 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:20:14.153 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.153 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:14.153 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.153 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.153 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.153 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.153 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.153 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:14.153 05:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:14.412 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:14.412 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.412 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:14.412 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:14.412 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:14.412 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.412 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.412 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.412 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.412 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.412 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.412 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.412 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.670 00:20:14.670 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.670 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.670 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.928 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.928 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.928 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.928 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.928 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.928 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.928 { 00:20:14.928 "cntlid": 33, 00:20:14.928 "qid": 0, 00:20:14.928 "state": "enabled", 00:20:14.928 "thread": "nvmf_tgt_poll_group_000", 00:20:14.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:14.928 "listen_address": { 00:20:14.928 "trtype": "TCP", 00:20:14.928 "adrfam": "IPv4", 00:20:14.928 "traddr": "10.0.0.2", 00:20:14.928 "trsvcid": "4420" 00:20:14.928 }, 00:20:14.928 "peer_address": { 00:20:14.928 "trtype": "TCP", 00:20:14.928 "adrfam": "IPv4", 00:20:14.928 "traddr": "10.0.0.1", 00:20:14.928 "trsvcid": "34238" 00:20:14.928 }, 00:20:14.928 "auth": { 00:20:14.928 "state": "completed", 00:20:14.928 "digest": "sha256", 00:20:14.928 "dhgroup": "ffdhe6144" 00:20:14.928 } 00:20:14.928 } 00:20:14.928 ]' 00:20:14.928 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.928 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.928 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.928 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:14.928 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.928 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.928 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.928 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.185 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:20:15.185 05:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:20:15.751 05:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.751 05:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:15.751 05:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.751 05:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.751 05:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.751 05:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.751 05:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:15.751 05:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:16.008 05:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:16.008 05:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.008 05:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:16.008 05:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:16.008 05:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:16.008 05:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.008 05:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.008 05:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.008 05:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.008 05:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.008 05:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.008 05:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.008 05:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.265 00:20:16.265 05:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.265 05:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.265 05:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.523 05:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.523 05:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.523 05:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.523 05:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.523 05:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.523 05:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.523 { 00:20:16.523 "cntlid": 35, 00:20:16.523 "qid": 0, 00:20:16.523 "state": "enabled", 00:20:16.523 "thread": "nvmf_tgt_poll_group_000", 00:20:16.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:16.523 "listen_address": { 00:20:16.523 "trtype": "TCP", 00:20:16.523 "adrfam": "IPv4", 00:20:16.523 "traddr": "10.0.0.2", 00:20:16.523 "trsvcid": "4420" 00:20:16.523 }, 00:20:16.523 "peer_address": { 00:20:16.523 "trtype": "TCP", 00:20:16.523 "adrfam": "IPv4", 00:20:16.523 "traddr": "10.0.0.1", 00:20:16.523 "trsvcid": "34254" 00:20:16.523 }, 00:20:16.523 "auth": { 00:20:16.523 "state": "completed", 00:20:16.523 "digest": "sha256", 00:20:16.523 "dhgroup": "ffdhe6144" 00:20:16.523 } 00:20:16.523 } 00:20:16.523 ]' 00:20:16.523 05:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.523 05:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.523 05:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.523 05:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:16.523 05:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.781 05:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.781 05:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.781 05:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.781 05:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:20:16.781 05:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:20:17.347 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.347 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:17.347 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.347 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.347 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.347 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.347 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:17.347 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:17.606 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:17.606 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.606 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:17.606 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:17.606 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:17.606 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.606 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.606 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.606 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.606 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.606 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.606 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.606 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.173 00:20:18.173 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.173 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.173 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.173 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.173 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.173 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.173 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.173 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.173 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.173 { 00:20:18.173 "cntlid": 37, 00:20:18.173 "qid": 0, 00:20:18.173 "state": "enabled", 00:20:18.173 "thread": "nvmf_tgt_poll_group_000", 00:20:18.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:18.173 "listen_address": { 00:20:18.173 "trtype": "TCP", 00:20:18.173 "adrfam": "IPv4", 00:20:18.173 "traddr": "10.0.0.2", 00:20:18.173 "trsvcid": "4420" 00:20:18.173 }, 00:20:18.173 "peer_address": { 00:20:18.173 "trtype": "TCP", 00:20:18.173 "adrfam": "IPv4", 00:20:18.173 "traddr": "10.0.0.1", 00:20:18.173 "trsvcid": "50904" 00:20:18.173 }, 00:20:18.173 "auth": { 00:20:18.173 "state": "completed", 00:20:18.173 "digest": "sha256", 00:20:18.173 "dhgroup": "ffdhe6144" 00:20:18.173 } 00:20:18.173 } 00:20:18.173 ]' 00:20:18.173 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.173 05:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.173 05:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.432 05:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:18.432 05:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.432 05:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.432 05:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:18.432 05:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.432 05:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:20:18.432 05:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:20:18.998 05:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.998 05:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:18.998 05:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.998 05:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.257 05:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.257 05:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.257 05:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:19.257 05:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:19.257 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:19.257 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.257 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:19.257 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:19.257 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:19.257 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.257 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:19.257 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.257 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.257 05:48:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.257 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:19.257 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:19.257 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:19.824 00:20:19.824 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.824 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.824 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.824 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.824 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.824 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.824 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.824 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.824 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.824 { 00:20:19.824 "cntlid": 39, 00:20:19.824 "qid": 0, 00:20:19.824 "state": "enabled", 00:20:19.824 "thread": "nvmf_tgt_poll_group_000", 00:20:19.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:19.824 "listen_address": { 00:20:19.824 "trtype": "TCP", 00:20:19.824 "adrfam": "IPv4", 00:20:19.824 "traddr": "10.0.0.2", 00:20:19.824 "trsvcid": "4420" 00:20:19.824 }, 00:20:19.824 "peer_address": { 00:20:19.824 "trtype": "TCP", 00:20:19.824 "adrfam": "IPv4", 00:20:19.824 "traddr": "10.0.0.1", 00:20:19.824 "trsvcid": "50930" 00:20:19.824 }, 00:20:19.824 "auth": { 00:20:19.824 "state": "completed", 00:20:19.824 "digest": "sha256", 00:20:19.824 "dhgroup": "ffdhe6144" 00:20:19.824 } 00:20:19.824 } 00:20:19.824 ]' 00:20:19.824 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.824 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.824 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.083 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:20.083 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.083 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:20.083 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.083 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.341 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:20:20.341 05:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:20:20.908 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.908 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:20.908 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.908 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.908 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.908 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.908 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.909 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:20.909 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:20.909 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:20.909 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.909 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:20.909 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:20.909 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:20.909 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.909 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.909 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
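The trace repeats the same cycle for every digest/dhgroup/key combination: the host's bdev_nvme layer is restricted to one DH-HMAC-CHAP digest and DH group, the host NQN is registered on the target subsystem with a key pair, a controller is attached with the same keys, the negotiated auth parameters are read back from the subsystem's qpair list, and everything is torn down again before the next combination. A condensed sketch of the iteration running at this point in the log (sha256 with ffdhe8192 and key0/ckey0; the RPCs, the /var/tmp/host.sock socket and both NQNs are taken from the trace, while the RPC/HOSTNQN/SUBNQN variables are shorthand introduced only for this sketch):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Host side: allow only this digest/dhgroup pair for DH-HMAC-CHAP.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # Target side: register the host with its key and controller key.
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: attach a controller; the connect must authenticate with the same keys.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Target side: confirm the qpair completed auth with the expected parameters.
    $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth'

    # Tear down before the next combination.
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN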
00:20:20.909 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.909 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.909 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.909 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.909 05:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.475 00:20:21.475 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.475 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.475 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.732 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.732 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.732 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.732 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.732 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.732 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.732 { 00:20:21.732 "cntlid": 41, 00:20:21.732 "qid": 0, 00:20:21.732 "state": "enabled", 00:20:21.732 "thread": "nvmf_tgt_poll_group_000", 00:20:21.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:21.732 "listen_address": { 00:20:21.732 "trtype": "TCP", 00:20:21.732 "adrfam": "IPv4", 00:20:21.732 "traddr": "10.0.0.2", 00:20:21.732 "trsvcid": "4420" 00:20:21.732 }, 00:20:21.732 "peer_address": { 00:20:21.732 "trtype": "TCP", 00:20:21.732 "adrfam": "IPv4", 00:20:21.732 "traddr": "10.0.0.1", 00:20:21.732 "trsvcid": "50966" 00:20:21.732 }, 00:20:21.732 "auth": { 00:20:21.732 "state": "completed", 00:20:21.732 "digest": "sha256", 00:20:21.732 "dhgroup": "ffdhe8192" 00:20:21.732 } 00:20:21.732 } 00:20:21.732 ]' 00:20:21.732 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.732 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.732 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.732 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:21.732 05:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.732 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.732 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.732 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.990 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:20:21.990 05:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:20:22.557 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.557 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:22.557 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.557 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.557 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.557 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.557 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:22.557 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:22.815 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:22.815 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.815 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:22.815 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:22.815 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:22.815 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.815 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.815 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.815 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.815 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.815 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.816 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.816 05:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.382 00:20:23.382 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.382 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.382 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.641 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.641 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.641 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.641 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.641 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.641 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.641 { 00:20:23.641 "cntlid": 43, 00:20:23.641 "qid": 0, 00:20:23.641 "state": "enabled", 00:20:23.641 "thread": "nvmf_tgt_poll_group_000", 00:20:23.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:23.641 "listen_address": { 00:20:23.641 "trtype": "TCP", 00:20:23.641 "adrfam": "IPv4", 00:20:23.641 "traddr": "10.0.0.2", 00:20:23.641 "trsvcid": "4420" 00:20:23.641 }, 00:20:23.641 "peer_address": { 00:20:23.641 "trtype": "TCP", 00:20:23.641 "adrfam": "IPv4", 00:20:23.641 "traddr": "10.0.0.1", 00:20:23.641 "trsvcid": "50998" 00:20:23.641 }, 00:20:23.641 "auth": { 00:20:23.641 "state": "completed", 00:20:23.641 "digest": "sha256", 00:20:23.641 "dhgroup": "ffdhe8192" 00:20:23.641 } 00:20:23.641 } 00:20:23.641 ]' 00:20:23.641 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.641 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:23.641 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.641 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:23.641 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.641 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.641 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.641 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.899 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:20:23.899 05:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:20:24.466 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.466 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:24.466 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.466 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.466 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.466 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.466 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:24.466 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:24.724 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:24.724 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.724 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:24.724 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:24.724 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:24.724 05:48:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.724 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.724 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.724 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.724 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.724 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.724 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.724 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.291 00:20:25.291 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.291 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.291 05:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.291 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.291 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.291 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.291 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.291 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.291 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.291 { 00:20:25.291 "cntlid": 45, 00:20:25.291 "qid": 0, 00:20:25.291 "state": "enabled", 00:20:25.291 "thread": "nvmf_tgt_poll_group_000", 00:20:25.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:25.291 "listen_address": { 00:20:25.291 "trtype": "TCP", 00:20:25.291 "adrfam": "IPv4", 00:20:25.291 "traddr": "10.0.0.2", 00:20:25.291 "trsvcid": "4420" 00:20:25.291 }, 00:20:25.291 "peer_address": { 00:20:25.291 "trtype": "TCP", 00:20:25.291 "adrfam": "IPv4", 00:20:25.291 "traddr": "10.0.0.1", 00:20:25.291 "trsvcid": "51024" 00:20:25.291 }, 00:20:25.291 "auth": { 00:20:25.291 "state": "completed", 00:20:25.291 "digest": "sha256", 00:20:25.291 "dhgroup": "ffdhe8192" 00:20:25.291 } 00:20:25.291 } 00:20:25.291 ]' 00:20:25.291 
05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.291 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.291 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.291 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:25.291 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.549 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.549 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.549 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.549 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:20:25.549 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:20:26.116 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.116 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:26.116 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.116 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.116 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.116 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.116 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:26.116 05:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:26.374 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:26.374 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.374 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:26.374 05:49:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:26.374 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:26.374 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.374 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:26.374 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.374 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.374 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.374 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:26.374 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:26.374 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:26.941 00:20:26.941 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.941 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.941 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.199 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.199 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.199 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.199 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.199 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.199 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.199 { 00:20:27.199 "cntlid": 47, 00:20:27.199 "qid": 0, 00:20:27.199 "state": "enabled", 00:20:27.199 "thread": "nvmf_tgt_poll_group_000", 00:20:27.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:27.199 "listen_address": { 00:20:27.199 "trtype": "TCP", 00:20:27.199 "adrfam": "IPv4", 00:20:27.199 "traddr": "10.0.0.2", 00:20:27.199 "trsvcid": "4420" 00:20:27.199 }, 00:20:27.199 "peer_address": { 00:20:27.199 "trtype": "TCP", 00:20:27.199 "adrfam": "IPv4", 00:20:27.199 "traddr": "10.0.0.1", 00:20:27.199 "trsvcid": "58176" 00:20:27.199 }, 00:20:27.199 "auth": { 00:20:27.199 "state": "completed", 00:20:27.199 
"digest": "sha256", 00:20:27.199 "dhgroup": "ffdhe8192" 00:20:27.199 } 00:20:27.199 } 00:20:27.199 ]' 00:20:27.199 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.199 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.199 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.200 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:27.200 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.200 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.200 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.200 05:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.458 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:20:27.458 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:20:28.024 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.024 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:28.024 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.025 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.025 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.025 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:28.025 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.025 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.025 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:28.025 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:28.283 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:28.283 05:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.283 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.283 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:28.283 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:28.283 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.283 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.283 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.283 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.283 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.283 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.283 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.283 05:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.541 00:20:28.541 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.541 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.541 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.800 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.800 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.800 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.800 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.800 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.800 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.800 { 00:20:28.800 "cntlid": 49, 00:20:28.800 "qid": 0, 00:20:28.800 "state": "enabled", 00:20:28.800 "thread": "nvmf_tgt_poll_group_000", 00:20:28.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:28.800 "listen_address": { 00:20:28.800 "trtype": "TCP", 00:20:28.800 "adrfam": "IPv4", 
00:20:28.800 "traddr": "10.0.0.2", 00:20:28.800 "trsvcid": "4420" 00:20:28.800 }, 00:20:28.800 "peer_address": { 00:20:28.800 "trtype": "TCP", 00:20:28.800 "adrfam": "IPv4", 00:20:28.800 "traddr": "10.0.0.1", 00:20:28.800 "trsvcid": "58210" 00:20:28.800 }, 00:20:28.800 "auth": { 00:20:28.800 "state": "completed", 00:20:28.800 "digest": "sha384", 00:20:28.800 "dhgroup": "null" 00:20:28.800 } 00:20:28.800 } 00:20:28.800 ]' 00:20:28.800 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.800 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.800 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.800 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:28.800 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.800 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.800 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.800 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.058 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:20:29.058 05:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:20:29.625 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.625 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:29.625 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.625 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.625 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.625 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.625 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:29.625 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:29.883 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:29.883 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.883 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:29.883 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:29.883 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:29.883 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.883 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.883 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.883 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.883 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.883 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.883 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.883 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.142 00:20:30.142 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.142 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.142 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.142 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.142 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.142 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.142 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.142 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.142 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.142 { 00:20:30.142 "cntlid": 51, 00:20:30.142 "qid": 0, 00:20:30.142 "state": "enabled", 
00:20:30.142 "thread": "nvmf_tgt_poll_group_000", 00:20:30.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:30.142 "listen_address": { 00:20:30.142 "trtype": "TCP", 00:20:30.142 "adrfam": "IPv4", 00:20:30.142 "traddr": "10.0.0.2", 00:20:30.142 "trsvcid": "4420" 00:20:30.142 }, 00:20:30.142 "peer_address": { 00:20:30.142 "trtype": "TCP", 00:20:30.142 "adrfam": "IPv4", 00:20:30.142 "traddr": "10.0.0.1", 00:20:30.142 "trsvcid": "58232" 00:20:30.142 }, 00:20:30.142 "auth": { 00:20:30.142 "state": "completed", 00:20:30.142 "digest": "sha384", 00:20:30.142 "dhgroup": "null" 00:20:30.142 } 00:20:30.142 } 00:20:30.142 ]' 00:20:30.142 05:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.401 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.401 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.401 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:30.401 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.401 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.401 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.401 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.660 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:20:30.660 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:20:31.227 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.227 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:31.227 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.227 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.227 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.227 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.227 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:31.227 05:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:31.227 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:31.227 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.227 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.227 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:31.227 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:31.227 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.227 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.227 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.227 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.227 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.227 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.227 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.227 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.485 00:20:31.485 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.485 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.485 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.743 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.743 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.743 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.743 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.743 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.743 05:49:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.743 { 00:20:31.743 "cntlid": 53, 00:20:31.743 "qid": 0, 00:20:31.743 "state": "enabled", 00:20:31.743 "thread": "nvmf_tgt_poll_group_000", 00:20:31.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:31.743 "listen_address": { 00:20:31.743 "trtype": "TCP", 00:20:31.743 "adrfam": "IPv4", 00:20:31.743 "traddr": "10.0.0.2", 00:20:31.743 "trsvcid": "4420" 00:20:31.743 }, 00:20:31.743 "peer_address": { 00:20:31.743 "trtype": "TCP", 00:20:31.743 "adrfam": "IPv4", 00:20:31.743 "traddr": "10.0.0.1", 00:20:31.743 "trsvcid": "58264" 00:20:31.743 }, 00:20:31.743 "auth": { 00:20:31.743 "state": "completed", 00:20:31.743 "digest": "sha384", 00:20:31.743 "dhgroup": "null" 00:20:31.743 } 00:20:31.743 } 00:20:31.743 ]' 00:20:31.743 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.743 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.743 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.743 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:31.743 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.002 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.002 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.002 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.002 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:20:32.002 05:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:20:32.568 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.568 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:32.568 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.568 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.568 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.568 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:32.568 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:32.569 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:32.827 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:32.827 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.827 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:32.827 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:32.827 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:32.827 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.827 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:32.827 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.827 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.827 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.827 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:32.827 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.827 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:33.086 00:20:33.086 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.086 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.086 05:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.344 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.344 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.344 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.344 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.344 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.344 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.344 { 00:20:33.344 "cntlid": 55, 00:20:33.344 "qid": 0, 00:20:33.344 "state": "enabled", 00:20:33.344 "thread": "nvmf_tgt_poll_group_000", 00:20:33.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:33.344 "listen_address": { 00:20:33.344 "trtype": "TCP", 00:20:33.344 "adrfam": "IPv4", 00:20:33.344 "traddr": "10.0.0.2", 00:20:33.344 "trsvcid": "4420" 00:20:33.344 }, 00:20:33.344 "peer_address": { 00:20:33.344 "trtype": "TCP", 00:20:33.344 "adrfam": "IPv4", 00:20:33.344 "traddr": "10.0.0.1", 00:20:33.344 "trsvcid": "58278" 00:20:33.344 }, 00:20:33.344 "auth": { 00:20:33.344 "state": "completed", 00:20:33.344 "digest": "sha384", 00:20:33.344 "dhgroup": "null" 00:20:33.344 } 00:20:33.344 } 00:20:33.344 ]' 00:20:33.344 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.344 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.344 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.344 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:33.344 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.344 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.344 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.344 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.602 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:20:33.602 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:20:34.169 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.169 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:34.169 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.169 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.169 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.169 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.169 05:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.169 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.169 05:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.428 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:34.428 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.428 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:34.428 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:34.428 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:34.428 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.428 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.428 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.428 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.428 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.428 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.428 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.428 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.686 00:20:34.686 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.686 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.686 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.970 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.970 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.970 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:34.970 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.970 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.970 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.970 { 00:20:34.970 "cntlid": 57, 00:20:34.970 "qid": 0, 00:20:34.970 "state": "enabled", 00:20:34.970 "thread": "nvmf_tgt_poll_group_000", 00:20:34.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:34.970 "listen_address": { 00:20:34.970 "trtype": "TCP", 00:20:34.970 "adrfam": "IPv4", 00:20:34.970 "traddr": "10.0.0.2", 00:20:34.970 "trsvcid": "4420" 00:20:34.970 }, 00:20:34.970 "peer_address": { 00:20:34.970 "trtype": "TCP", 00:20:34.970 "adrfam": "IPv4", 00:20:34.970 "traddr": "10.0.0.1", 00:20:34.970 "trsvcid": "58298" 00:20:34.970 }, 00:20:34.970 "auth": { 00:20:34.970 "state": "completed", 00:20:34.970 "digest": "sha384", 00:20:34.970 "dhgroup": "ffdhe2048" 00:20:34.970 } 00:20:34.970 } 00:20:34.970 ]' 00:20:34.970 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.970 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.970 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.970 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:34.970 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.970 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.970 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.970 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.293 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:20:35.293 05:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:20:35.879 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.879 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:35.879 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.879 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.879 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.879 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.879 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:35.879 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:35.879 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:35.879 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.879 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.879 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:35.879 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:35.879 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.879 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.879 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.879 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.879 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.879 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.879 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.879 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.137 00:20:36.137 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.137 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.137 05:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.396 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.396 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.396 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.396 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.396 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.396 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.396 { 00:20:36.396 "cntlid": 59, 00:20:36.396 "qid": 0, 00:20:36.396 "state": "enabled", 00:20:36.396 "thread": "nvmf_tgt_poll_group_000", 00:20:36.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:36.396 "listen_address": { 00:20:36.396 "trtype": "TCP", 00:20:36.396 "adrfam": "IPv4", 00:20:36.396 "traddr": "10.0.0.2", 00:20:36.396 "trsvcid": "4420" 00:20:36.396 }, 00:20:36.396 "peer_address": { 00:20:36.396 "trtype": "TCP", 00:20:36.396 "adrfam": "IPv4", 00:20:36.396 "traddr": "10.0.0.1", 00:20:36.396 "trsvcid": "58326" 00:20:36.396 }, 00:20:36.396 "auth": { 00:20:36.396 "state": "completed", 00:20:36.396 "digest": "sha384", 00:20:36.396 "dhgroup": "ffdhe2048" 00:20:36.396 } 00:20:36.396 } 00:20:36.396 ]' 00:20:36.396 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.396 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.396 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.655 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:36.655 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.655 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.655 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.655 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.913 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:20:36.913 05:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:20:37.480 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.480 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:37.480 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.480 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.480 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.480 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.480 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:37.480 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:37.738 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:37.738 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.738 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:37.738 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:37.738 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:37.738 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.738 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.738 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.738 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.738 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.738 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.738 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.738 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.738 00:20:37.997 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.997 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.997 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.997 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.997 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.997 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.997 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.997 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.997 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.997 { 00:20:37.997 "cntlid": 61, 00:20:37.997 "qid": 0, 00:20:37.997 "state": "enabled", 00:20:37.997 "thread": "nvmf_tgt_poll_group_000", 00:20:37.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:37.997 "listen_address": { 00:20:37.997 "trtype": "TCP", 00:20:37.997 "adrfam": "IPv4", 00:20:37.997 "traddr": "10.0.0.2", 00:20:37.997 "trsvcid": "4420" 00:20:37.997 }, 00:20:37.997 "peer_address": { 00:20:37.997 "trtype": "TCP", 00:20:37.997 "adrfam": "IPv4", 00:20:37.997 "traddr": "10.0.0.1", 00:20:37.997 "trsvcid": "34214" 00:20:37.997 }, 00:20:37.997 "auth": { 00:20:37.997 "state": "completed", 00:20:37.997 "digest": "sha384", 00:20:37.997 "dhgroup": "ffdhe2048" 00:20:37.997 } 00:20:37.997 } 00:20:37.997 ]' 00:20:37.997 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.997 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.997 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.256 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:38.256 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.256 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.256 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.256 05:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.256 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:20:38.256 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:20:39.191 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.191 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:39.191 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.191 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.191 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.191 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.191 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:39.191 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:39.191 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:39.191 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.191 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:39.191 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:39.192 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:39.192 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.192 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:39.192 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.192 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.192 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.192 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:39.192 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.192 05:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.450 00:20:39.450 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.450 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.450 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.709 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.709 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.709 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.709 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.709 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.709 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.709 { 00:20:39.709 "cntlid": 63, 00:20:39.709 "qid": 0, 00:20:39.709 "state": "enabled", 00:20:39.709 "thread": "nvmf_tgt_poll_group_000", 00:20:39.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:39.709 "listen_address": { 00:20:39.709 "trtype": "TCP", 00:20:39.709 "adrfam": "IPv4", 00:20:39.709 "traddr": "10.0.0.2", 00:20:39.709 "trsvcid": "4420" 00:20:39.709 }, 00:20:39.709 "peer_address": { 00:20:39.709 "trtype": "TCP", 00:20:39.709 "adrfam": "IPv4", 00:20:39.709 "traddr": "10.0.0.1", 00:20:39.709 "trsvcid": "34246" 00:20:39.709 }, 00:20:39.709 "auth": { 00:20:39.709 "state": "completed", 00:20:39.709 "digest": "sha384", 00:20:39.709 "dhgroup": "ffdhe2048" 00:20:39.709 } 00:20:39.709 } 00:20:39.709 ]' 00:20:39.709 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.709 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.709 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.709 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:39.709 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.709 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.709 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.709 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.968 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:20:39.968 05:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:20:40.534 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:40.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.534 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:40.534 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.534 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.534 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.534 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.534 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.534 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:40.534 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:40.793 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:40.793 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.793 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.793 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:40.793 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:40.793 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.793 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.793 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.793 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.793 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.793 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.793 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.793 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.052 
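Between the RPC rounds, the nvme_connect helper in auth.sh also performs an in-band authenticated connect with nvme-cli using the DHHC-1 secrets printed in the log. A sketch of that step follows; the secret strings are placeholders standing in for the full values shown above, and the host UUID doubles as --hostid exactly as in the log.

# In-band DH-HMAC-CHAP connect/disconnect as run by nvme_connect in auth.sh.
# The DHHC-1:... arguments below are placeholders for the secrets printed above.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q $HOSTNQN --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
    --dhchap-secret 'DHHC-1:00:<host secret>' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl secret>'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # log expects: 1 controller(s) disconnected
# Before the next round the host entry is removed again on the target:
# rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 $HOSTNQN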
00:20:41.052 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.052 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.052 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.310 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.310 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.310 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.310 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.310 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.310 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.310 { 00:20:41.310 "cntlid": 65, 00:20:41.310 "qid": 0, 00:20:41.310 "state": "enabled", 00:20:41.310 "thread": "nvmf_tgt_poll_group_000", 00:20:41.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:41.310 "listen_address": { 00:20:41.310 "trtype": "TCP", 00:20:41.310 "adrfam": "IPv4", 00:20:41.310 "traddr": "10.0.0.2", 00:20:41.310 "trsvcid": "4420" 00:20:41.310 }, 00:20:41.310 "peer_address": { 00:20:41.310 "trtype": "TCP", 00:20:41.310 "adrfam": "IPv4", 00:20:41.310 "traddr": "10.0.0.1", 00:20:41.310 "trsvcid": "34268" 00:20:41.310 }, 00:20:41.310 "auth": { 00:20:41.310 "state": "completed", 00:20:41.310 "digest": "sha384", 00:20:41.310 "dhgroup": "ffdhe3072" 00:20:41.310 } 00:20:41.310 } 00:20:41.310 ]' 00:20:41.310 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.310 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.310 05:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.310 05:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:41.311 05:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.311 05:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.311 05:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.311 05:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.569 05:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:20:41.569 05:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:20:42.135 05:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.135 05:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:42.135 05:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.135 05:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.135 05:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.135 05:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.135 05:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:42.135 05:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:42.393 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:42.393 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.393 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.393 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:42.393 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:42.393 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.393 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.393 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.393 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.393 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.393 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.393 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.393 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.651 00:20:42.651 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.651 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.651 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.651 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.651 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.651 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.651 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.651 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.651 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.651 { 00:20:42.651 "cntlid": 67, 00:20:42.651 "qid": 0, 00:20:42.651 "state": "enabled", 00:20:42.651 "thread": "nvmf_tgt_poll_group_000", 00:20:42.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:42.651 "listen_address": { 00:20:42.651 "trtype": "TCP", 00:20:42.651 "adrfam": "IPv4", 00:20:42.651 "traddr": "10.0.0.2", 00:20:42.651 "trsvcid": "4420" 00:20:42.651 }, 00:20:42.651 "peer_address": { 00:20:42.651 "trtype": "TCP", 00:20:42.651 "adrfam": "IPv4", 00:20:42.651 "traddr": "10.0.0.1", 00:20:42.651 "trsvcid": "34290" 00:20:42.651 }, 00:20:42.651 "auth": { 00:20:42.651 "state": "completed", 00:20:42.651 "digest": "sha384", 00:20:42.651 "dhgroup": "ffdhe3072" 00:20:42.651 } 00:20:42.651 } 00:20:42.651 ]' 00:20:42.651 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.910 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.910 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.910 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:42.910 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.910 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.910 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.910 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.168 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret 
DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:20:43.168 05:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:20:43.735 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.735 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:43.735 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.735 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.735 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.735 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.735 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:43.735 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:43.735 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:43.735 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.735 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.735 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:43.735 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:43.735 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.993 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.993 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.993 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.993 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.994 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.994 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.994 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.994 00:20:44.252 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.252 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.252 05:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.252 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.252 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.252 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.252 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.252 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.252 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.252 { 00:20:44.252 "cntlid": 69, 00:20:44.252 "qid": 0, 00:20:44.252 "state": "enabled", 00:20:44.252 "thread": "nvmf_tgt_poll_group_000", 00:20:44.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:44.252 "listen_address": { 00:20:44.252 "trtype": "TCP", 00:20:44.252 "adrfam": "IPv4", 00:20:44.252 "traddr": "10.0.0.2", 00:20:44.252 "trsvcid": "4420" 00:20:44.252 }, 00:20:44.252 "peer_address": { 00:20:44.252 "trtype": "TCP", 00:20:44.252 "adrfam": "IPv4", 00:20:44.252 "traddr": "10.0.0.1", 00:20:44.252 "trsvcid": "34318" 00:20:44.252 }, 00:20:44.252 "auth": { 00:20:44.252 "state": "completed", 00:20:44.252 "digest": "sha384", 00:20:44.252 "dhgroup": "ffdhe3072" 00:20:44.252 } 00:20:44.252 } 00:20:44.252 ]' 00:20:44.252 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.252 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.252 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.511 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:44.511 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.511 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.511 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.511 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:44.769 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:20:44.769 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:20:45.336 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.336 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:45.336 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.336 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.336 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.336 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.336 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.336 05:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.336 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:45.336 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.336 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.336 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:45.336 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:45.336 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.336 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:45.336 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.336 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.336 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.336 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:20:45.336 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.336 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.595 00:20:45.595 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.595 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.595 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.853 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.853 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.853 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.853 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.853 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.853 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.853 { 00:20:45.853 "cntlid": 71, 00:20:45.853 "qid": 0, 00:20:45.853 "state": "enabled", 00:20:45.853 "thread": "nvmf_tgt_poll_group_000", 00:20:45.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:45.853 "listen_address": { 00:20:45.853 "trtype": "TCP", 00:20:45.853 "adrfam": "IPv4", 00:20:45.853 "traddr": "10.0.0.2", 00:20:45.853 "trsvcid": "4420" 00:20:45.853 }, 00:20:45.853 "peer_address": { 00:20:45.853 "trtype": "TCP", 00:20:45.853 "adrfam": "IPv4", 00:20:45.853 "traddr": "10.0.0.1", 00:20:45.853 "trsvcid": "34348" 00:20:45.853 }, 00:20:45.853 "auth": { 00:20:45.853 "state": "completed", 00:20:45.853 "digest": "sha384", 00:20:45.853 "dhgroup": "ffdhe3072" 00:20:45.853 } 00:20:45.853 } 00:20:45.853 ]' 00:20:45.853 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.853 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.853 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.112 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:46.112 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.112 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.112 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.112 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.112 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:20:46.112 05:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:20:46.677 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.935 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:46.935 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.935 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.935 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.935 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.935 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.935 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.935 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.935 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:46.935 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.935 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.935 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:46.935 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:46.935 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.935 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.935 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.935 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.935 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:46.935 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.935 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.935 05:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.193 00:20:47.193 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.193 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.193 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.450 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.450 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.450 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.450 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.450 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.450 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.450 { 00:20:47.450 "cntlid": 73, 00:20:47.450 "qid": 0, 00:20:47.450 "state": "enabled", 00:20:47.450 "thread": "nvmf_tgt_poll_group_000", 00:20:47.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:47.450 "listen_address": { 00:20:47.450 "trtype": "TCP", 00:20:47.450 "adrfam": "IPv4", 00:20:47.450 "traddr": "10.0.0.2", 00:20:47.450 "trsvcid": "4420" 00:20:47.450 }, 00:20:47.450 "peer_address": { 00:20:47.450 "trtype": "TCP", 00:20:47.450 "adrfam": "IPv4", 00:20:47.450 "traddr": "10.0.0.1", 00:20:47.450 "trsvcid": "56202" 00:20:47.450 }, 00:20:47.450 "auth": { 00:20:47.450 "state": "completed", 00:20:47.450 "digest": "sha384", 00:20:47.450 "dhgroup": "ffdhe4096" 00:20:47.450 } 00:20:47.450 } 00:20:47.450 ]' 00:20:47.450 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.450 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.450 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.708 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:47.708 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.708 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.708 
05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.708 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.708 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:20:47.708 05:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:20:48.273 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.273 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:48.273 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.273 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.273 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.273 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.273 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:48.273 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:48.531 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:48.531 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.531 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.531 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:48.531 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:48.531 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.531 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.531 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.531 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.531 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.531 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.531 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.531 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.789 00:20:48.789 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.789 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.789 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.046 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.046 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.046 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.046 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.046 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.046 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.046 { 00:20:49.046 "cntlid": 75, 00:20:49.046 "qid": 0, 00:20:49.046 "state": "enabled", 00:20:49.046 "thread": "nvmf_tgt_poll_group_000", 00:20:49.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:49.046 "listen_address": { 00:20:49.046 "trtype": "TCP", 00:20:49.046 "adrfam": "IPv4", 00:20:49.047 "traddr": "10.0.0.2", 00:20:49.047 "trsvcid": "4420" 00:20:49.047 }, 00:20:49.047 "peer_address": { 00:20:49.047 "trtype": "TCP", 00:20:49.047 "adrfam": "IPv4", 00:20:49.047 "traddr": "10.0.0.1", 00:20:49.047 "trsvcid": "56238" 00:20:49.047 }, 00:20:49.047 "auth": { 00:20:49.047 "state": "completed", 00:20:49.047 "digest": "sha384", 00:20:49.047 "dhgroup": "ffdhe4096" 00:20:49.047 } 00:20:49.047 } 00:20:49.047 ]' 00:20:49.047 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.047 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.047 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.047 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:20:49.047 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.047 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.047 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.047 05:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.304 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:20:49.304 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:20:49.869 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.869 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:49.869 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.869 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.869 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.869 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.869 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:49.870 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:50.127 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:50.127 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.127 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:50.127 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:50.127 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:50.127 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.127 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.127 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.127 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.127 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.127 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.127 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.127 05:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.385 00:20:50.385 05:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.385 05:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.385 05:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.643 05:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.643 05:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.643 05:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.643 05:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.643 05:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.643 05:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.643 { 00:20:50.643 "cntlid": 77, 00:20:50.643 "qid": 0, 00:20:50.643 "state": "enabled", 00:20:50.643 "thread": "nvmf_tgt_poll_group_000", 00:20:50.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:50.643 "listen_address": { 00:20:50.643 "trtype": "TCP", 00:20:50.643 "adrfam": "IPv4", 00:20:50.643 "traddr": "10.0.0.2", 00:20:50.643 "trsvcid": "4420" 00:20:50.643 }, 00:20:50.643 "peer_address": { 00:20:50.643 "trtype": "TCP", 00:20:50.643 "adrfam": "IPv4", 00:20:50.643 "traddr": "10.0.0.1", 00:20:50.643 "trsvcid": "56254" 00:20:50.643 }, 00:20:50.643 "auth": { 00:20:50.643 "state": "completed", 00:20:50.643 "digest": "sha384", 00:20:50.643 "dhgroup": "ffdhe4096" 00:20:50.643 } 00:20:50.643 } 00:20:50.643 ]' 00:20:50.643 05:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.643 05:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.643 05:49:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.643 05:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:50.643 05:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.643 05:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.643 05:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.643 05:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.900 05:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:20:50.900 05:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:20:51.464 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.464 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:51.464 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.464 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.464 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.464 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.464 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:51.464 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:51.720 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:51.720 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.720 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:51.720 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:51.720 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:51.720 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.720 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:51.720 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.720 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.720 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.720 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:51.720 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:51.720 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:51.978 00:20:51.978 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.978 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.978 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.236 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.236 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.236 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.236 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.236 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.236 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.236 { 00:20:52.236 "cntlid": 79, 00:20:52.236 "qid": 0, 00:20:52.236 "state": "enabled", 00:20:52.236 "thread": "nvmf_tgt_poll_group_000", 00:20:52.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:52.236 "listen_address": { 00:20:52.236 "trtype": "TCP", 00:20:52.236 "adrfam": "IPv4", 00:20:52.236 "traddr": "10.0.0.2", 00:20:52.236 "trsvcid": "4420" 00:20:52.236 }, 00:20:52.236 "peer_address": { 00:20:52.236 "trtype": "TCP", 00:20:52.236 "adrfam": "IPv4", 00:20:52.236 "traddr": "10.0.0.1", 00:20:52.236 "trsvcid": "56290" 00:20:52.236 }, 00:20:52.236 "auth": { 00:20:52.236 "state": "completed", 00:20:52.236 "digest": "sha384", 00:20:52.236 "dhgroup": "ffdhe4096" 00:20:52.236 } 00:20:52.236 } 00:20:52.236 ]' 00:20:52.236 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.236 05:49:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.236 05:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.236 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:52.236 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.236 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.236 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.236 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.493 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:20:52.493 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:20:53.059 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.059 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:53.059 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.059 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.059 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.059 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.059 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.059 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:53.059 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:53.317 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:53.317 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.317 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:53.317 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:53.317 05:49:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:53.317 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.317 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.317 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.317 05:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.317 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.317 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.317 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.317 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.575 00:20:53.575 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.575 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.575 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.833 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.833 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.833 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.833 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.833 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.833 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.833 { 00:20:53.833 "cntlid": 81, 00:20:53.833 "qid": 0, 00:20:53.833 "state": "enabled", 00:20:53.833 "thread": "nvmf_tgt_poll_group_000", 00:20:53.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:53.833 "listen_address": { 00:20:53.833 "trtype": "TCP", 00:20:53.833 "adrfam": "IPv4", 00:20:53.833 "traddr": "10.0.0.2", 00:20:53.833 "trsvcid": "4420" 00:20:53.833 }, 00:20:53.833 "peer_address": { 00:20:53.833 "trtype": "TCP", 00:20:53.833 "adrfam": "IPv4", 00:20:53.833 "traddr": "10.0.0.1", 00:20:53.833 "trsvcid": "56322" 00:20:53.833 }, 00:20:53.833 "auth": { 00:20:53.833 "state": "completed", 00:20:53.833 "digest": 
"sha384", 00:20:53.833 "dhgroup": "ffdhe6144" 00:20:53.833 } 00:20:53.833 } 00:20:53.833 ]' 00:20:53.833 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.833 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.833 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.833 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:53.833 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.833 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.833 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.833 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.091 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:20:54.091 05:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:20:54.656 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.656 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:54.656 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.656 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.656 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.656 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.656 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:54.656 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:54.914 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:54.914 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.914 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.914 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:54.914 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:54.914 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.914 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.914 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.914 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.914 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.914 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.914 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.914 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.171 00:20:55.172 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.172 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.172 05:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.429 05:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.429 05:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.429 05:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.429 05:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.429 05:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.429 05:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.429 { 00:20:55.429 "cntlid": 83, 00:20:55.429 "qid": 0, 00:20:55.429 "state": "enabled", 00:20:55.429 "thread": "nvmf_tgt_poll_group_000", 00:20:55.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:55.429 "listen_address": { 00:20:55.429 "trtype": "TCP", 00:20:55.429 "adrfam": "IPv4", 00:20:55.429 "traddr": "10.0.0.2", 00:20:55.429 
"trsvcid": "4420" 00:20:55.429 }, 00:20:55.429 "peer_address": { 00:20:55.429 "trtype": "TCP", 00:20:55.429 "adrfam": "IPv4", 00:20:55.429 "traddr": "10.0.0.1", 00:20:55.429 "trsvcid": "56356" 00:20:55.429 }, 00:20:55.429 "auth": { 00:20:55.429 "state": "completed", 00:20:55.429 "digest": "sha384", 00:20:55.429 "dhgroup": "ffdhe6144" 00:20:55.429 } 00:20:55.429 } 00:20:55.429 ]' 00:20:55.429 05:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.429 05:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.429 05:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.429 05:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:55.429 05:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.687 05:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.687 05:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.687 05:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.687 05:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:20:55.687 05:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:20:56.251 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.251 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:56.251 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.251 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.251 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.251 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.251 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:56.251 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:56.508 
05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:56.509 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.509 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:56.509 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:56.509 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:56.509 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.509 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.509 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.509 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.509 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.509 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.509 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.509 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.766 00:20:57.024 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.024 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.024 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.024 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.024 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.024 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.024 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.024 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.024 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.024 { 00:20:57.024 "cntlid": 85, 00:20:57.024 "qid": 0, 00:20:57.024 "state": "enabled", 00:20:57.024 "thread": "nvmf_tgt_poll_group_000", 00:20:57.024 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:57.024 "listen_address": { 00:20:57.024 "trtype": "TCP", 00:20:57.024 "adrfam": "IPv4", 00:20:57.024 "traddr": "10.0.0.2", 00:20:57.024 "trsvcid": "4420" 00:20:57.024 }, 00:20:57.024 "peer_address": { 00:20:57.024 "trtype": "TCP", 00:20:57.024 "adrfam": "IPv4", 00:20:57.024 "traddr": "10.0.0.1", 00:20:57.024 "trsvcid": "50750" 00:20:57.024 }, 00:20:57.024 "auth": { 00:20:57.024 "state": "completed", 00:20:57.024 "digest": "sha384", 00:20:57.024 "dhgroup": "ffdhe6144" 00:20:57.024 } 00:20:57.024 } 00:20:57.024 ]' 00:20:57.024 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.024 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.024 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.282 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:57.282 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.282 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.282 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.282 05:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.539 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:20:57.539 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:20:58.105 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.105 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:58.105 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.105 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.105 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.105 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.105 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:58.105 05:49:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:58.105 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:58.105 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.105 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:58.105 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:58.105 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:58.105 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.105 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:58.105 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.105 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.105 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.105 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:58.105 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.105 05:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.670 00:20:58.670 05:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.670 05:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.670 05:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.670 05:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.670 05:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.670 05:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.670 05:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.670 05:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.670 05:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.670 { 00:20:58.670 "cntlid": 87, 
00:20:58.670 "qid": 0, 00:20:58.670 "state": "enabled", 00:20:58.670 "thread": "nvmf_tgt_poll_group_000", 00:20:58.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:58.670 "listen_address": { 00:20:58.670 "trtype": "TCP", 00:20:58.670 "adrfam": "IPv4", 00:20:58.670 "traddr": "10.0.0.2", 00:20:58.670 "trsvcid": "4420" 00:20:58.670 }, 00:20:58.670 "peer_address": { 00:20:58.670 "trtype": "TCP", 00:20:58.670 "adrfam": "IPv4", 00:20:58.670 "traddr": "10.0.0.1", 00:20:58.670 "trsvcid": "50780" 00:20:58.670 }, 00:20:58.670 "auth": { 00:20:58.670 "state": "completed", 00:20:58.670 "digest": "sha384", 00:20:58.670 "dhgroup": "ffdhe6144" 00:20:58.670 } 00:20:58.670 } 00:20:58.670 ]' 00:20:58.670 05:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.670 05:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.670 05:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.928 05:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:58.928 05:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.928 05:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.928 05:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.928 05:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.928 05:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:20:58.928 05:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:20:59.492 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.492 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:59.492 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.492 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.750 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.750 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.750 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.750 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:59.750 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:59.750 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:59.750 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.750 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:59.750 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:59.750 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:59.750 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.750 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.750 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.750 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.750 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.750 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.750 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.750 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.315 00:21:00.315 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.315 05:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.315 05:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.572 05:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.572 05:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.572 05:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.572 05:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.572 05:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.572 05:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.572 { 00:21:00.572 "cntlid": 89, 00:21:00.572 "qid": 0, 00:21:00.572 "state": "enabled", 00:21:00.572 "thread": "nvmf_tgt_poll_group_000", 00:21:00.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:00.572 "listen_address": { 00:21:00.572 "trtype": "TCP", 00:21:00.572 "adrfam": "IPv4", 00:21:00.572 "traddr": "10.0.0.2", 00:21:00.572 "trsvcid": "4420" 00:21:00.572 }, 00:21:00.572 "peer_address": { 00:21:00.572 "trtype": "TCP", 00:21:00.572 "adrfam": "IPv4", 00:21:00.572 "traddr": "10.0.0.1", 00:21:00.572 "trsvcid": "50816" 00:21:00.572 }, 00:21:00.572 "auth": { 00:21:00.572 "state": "completed", 00:21:00.572 "digest": "sha384", 00:21:00.572 "dhgroup": "ffdhe8192" 00:21:00.572 } 00:21:00.572 } 00:21:00.572 ]' 00:21:00.572 05:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.572 05:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.572 05:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.572 05:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:00.572 05:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.572 05:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.572 05:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.572 05:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.830 05:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:21:00.830 05:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:21:01.393 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.393 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:01.393 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.393 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.393 05:49:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.393 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.393 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:01.393 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:01.649 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:01.649 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.649 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:01.649 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:01.649 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:01.649 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.649 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.649 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.649 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.649 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.649 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.649 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.649 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.214 00:21:02.214 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.214 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.214 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.214 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.214 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:02.214 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.214 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.214 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.214 05:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.214 { 00:21:02.214 "cntlid": 91, 00:21:02.214 "qid": 0, 00:21:02.214 "state": "enabled", 00:21:02.214 "thread": "nvmf_tgt_poll_group_000", 00:21:02.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:02.214 "listen_address": { 00:21:02.214 "trtype": "TCP", 00:21:02.214 "adrfam": "IPv4", 00:21:02.214 "traddr": "10.0.0.2", 00:21:02.214 "trsvcid": "4420" 00:21:02.214 }, 00:21:02.214 "peer_address": { 00:21:02.214 "trtype": "TCP", 00:21:02.214 "adrfam": "IPv4", 00:21:02.214 "traddr": "10.0.0.1", 00:21:02.214 "trsvcid": "50854" 00:21:02.214 }, 00:21:02.214 "auth": { 00:21:02.214 "state": "completed", 00:21:02.214 "digest": "sha384", 00:21:02.214 "dhgroup": "ffdhe8192" 00:21:02.214 } 00:21:02.214 } 00:21:02.214 ]' 00:21:02.214 05:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.214 05:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.214 05:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.472 05:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:02.472 05:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.472 05:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.472 05:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.472 05:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.472 05:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:21:02.472 05:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:21:03.038 05:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.038 05:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:03.038 05:49:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.038 05:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.038 05:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.038 05:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.038 05:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.038 05:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.296 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:03.296 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.296 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:03.296 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:03.296 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:03.296 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.296 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.296 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.296 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.296 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.296 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.296 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.296 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.862 00:21:03.862 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.862 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.862 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.119 05:49:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.119 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.119 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.119 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.119 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.119 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.119 { 00:21:04.119 "cntlid": 93, 00:21:04.119 "qid": 0, 00:21:04.119 "state": "enabled", 00:21:04.119 "thread": "nvmf_tgt_poll_group_000", 00:21:04.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:04.119 "listen_address": { 00:21:04.119 "trtype": "TCP", 00:21:04.119 "adrfam": "IPv4", 00:21:04.119 "traddr": "10.0.0.2", 00:21:04.119 "trsvcid": "4420" 00:21:04.119 }, 00:21:04.119 "peer_address": { 00:21:04.119 "trtype": "TCP", 00:21:04.119 "adrfam": "IPv4", 00:21:04.119 "traddr": "10.0.0.1", 00:21:04.119 "trsvcid": "50870" 00:21:04.119 }, 00:21:04.119 "auth": { 00:21:04.119 "state": "completed", 00:21:04.119 "digest": "sha384", 00:21:04.119 "dhgroup": "ffdhe8192" 00:21:04.119 } 00:21:04.119 } 00:21:04.119 ]' 00:21:04.119 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.119 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.119 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.119 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:04.119 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.119 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.119 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.119 05:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.377 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:21:04.377 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:21:04.942 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.942 05:49:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:04.942 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.942 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.942 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.942 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.942 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:04.942 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:05.199 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:05.199 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.199 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:05.199 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:05.199 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:05.199 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.199 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:05.199 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.199 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.199 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.199 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:05.199 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:05.199 05:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:05.765 00:21:05.765 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.765 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.765 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.765 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.765 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.765 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.765 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.765 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.765 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.765 { 00:21:05.765 "cntlid": 95, 00:21:05.765 "qid": 0, 00:21:05.765 "state": "enabled", 00:21:05.765 "thread": "nvmf_tgt_poll_group_000", 00:21:05.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:05.765 "listen_address": { 00:21:05.765 "trtype": "TCP", 00:21:05.765 "adrfam": "IPv4", 00:21:05.765 "traddr": "10.0.0.2", 00:21:05.765 "trsvcid": "4420" 00:21:05.765 }, 00:21:05.765 "peer_address": { 00:21:05.765 "trtype": "TCP", 00:21:05.765 "adrfam": "IPv4", 00:21:05.765 "traddr": "10.0.0.1", 00:21:05.765 "trsvcid": "50906" 00:21:05.765 }, 00:21:05.765 "auth": { 00:21:05.765 "state": "completed", 00:21:05.765 "digest": "sha384", 00:21:05.765 "dhgroup": "ffdhe8192" 00:21:05.765 } 00:21:05.765 } 00:21:05.765 ]' 00:21:05.765 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.765 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.765 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.765 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:05.765 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.022 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.022 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.022 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.022 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:21:06.022 05:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:21:06.588 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.588 05:49:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:06.588 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.588 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.588 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.588 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:06.588 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.588 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.588 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.588 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.846 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:06.846 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.846 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.846 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:06.846 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:06.846 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.846 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.846 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.846 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.846 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.846 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.846 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.846 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.103 00:21:07.103 
05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.103 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.103 05:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.360 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.360 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.360 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.360 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.360 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.360 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.360 { 00:21:07.360 "cntlid": 97, 00:21:07.360 "qid": 0, 00:21:07.360 "state": "enabled", 00:21:07.360 "thread": "nvmf_tgt_poll_group_000", 00:21:07.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:07.360 "listen_address": { 00:21:07.360 "trtype": "TCP", 00:21:07.360 "adrfam": "IPv4", 00:21:07.360 "traddr": "10.0.0.2", 00:21:07.360 "trsvcid": "4420" 00:21:07.360 }, 00:21:07.360 "peer_address": { 00:21:07.360 "trtype": "TCP", 00:21:07.360 "adrfam": "IPv4", 00:21:07.360 "traddr": "10.0.0.1", 00:21:07.360 "trsvcid": "41116" 00:21:07.361 }, 00:21:07.361 "auth": { 00:21:07.361 "state": "completed", 00:21:07.361 "digest": "sha512", 00:21:07.361 "dhgroup": "null" 00:21:07.361 } 00:21:07.361 } 00:21:07.361 ]' 00:21:07.361 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.361 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.361 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.361 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:07.361 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.618 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.618 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.618 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.618 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:21:07.618 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:21:08.183 05:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.183 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:08.184 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.184 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.184 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.184 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.184 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:08.184 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:08.441 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:08.441 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.441 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:08.441 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:08.441 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:08.441 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.441 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.441 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.441 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.441 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.441 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.441 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.441 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.699 00:21:08.699 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.699 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.699 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.957 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.957 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.957 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.957 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.957 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.957 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.957 { 00:21:08.957 "cntlid": 99, 00:21:08.957 "qid": 0, 00:21:08.957 "state": "enabled", 00:21:08.957 "thread": "nvmf_tgt_poll_group_000", 00:21:08.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:08.957 "listen_address": { 00:21:08.957 "trtype": "TCP", 00:21:08.957 "adrfam": "IPv4", 00:21:08.957 "traddr": "10.0.0.2", 00:21:08.957 "trsvcid": "4420" 00:21:08.957 }, 00:21:08.957 "peer_address": { 00:21:08.957 "trtype": "TCP", 00:21:08.957 "adrfam": "IPv4", 00:21:08.957 "traddr": "10.0.0.1", 00:21:08.957 "trsvcid": "41148" 00:21:08.957 }, 00:21:08.957 "auth": { 00:21:08.957 "state": "completed", 00:21:08.957 "digest": "sha512", 00:21:08.957 "dhgroup": "null" 00:21:08.957 } 00:21:08.957 } 00:21:08.957 ]' 00:21:08.957 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.957 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.957 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.957 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:08.957 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.957 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.957 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.957 05:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.214 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:21:09.214 05:49:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:21:09.780 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.780 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:09.780 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.780 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.780 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.780 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.780 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:09.780 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:10.038 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:10.038 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.038 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:10.038 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:10.038 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:10.038 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.038 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.038 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.038 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.038 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.038 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.038 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:10.038 05:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.295 00:21:10.295 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.295 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.295 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.553 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.553 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.553 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.553 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.553 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.553 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.553 { 00:21:10.553 "cntlid": 101, 00:21:10.553 "qid": 0, 00:21:10.553 "state": "enabled", 00:21:10.553 "thread": "nvmf_tgt_poll_group_000", 00:21:10.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:10.553 "listen_address": { 00:21:10.553 "trtype": "TCP", 00:21:10.553 "adrfam": "IPv4", 00:21:10.553 "traddr": "10.0.0.2", 00:21:10.553 "trsvcid": "4420" 00:21:10.553 }, 00:21:10.553 "peer_address": { 00:21:10.553 "trtype": "TCP", 00:21:10.553 "adrfam": "IPv4", 00:21:10.553 "traddr": "10.0.0.1", 00:21:10.553 "trsvcid": "41158" 00:21:10.553 }, 00:21:10.553 "auth": { 00:21:10.553 "state": "completed", 00:21:10.553 "digest": "sha512", 00:21:10.553 "dhgroup": "null" 00:21:10.553 } 00:21:10.553 } 00:21:10.553 ]' 00:21:10.553 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.553 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.553 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.553 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:10.553 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.553 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.553 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.553 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.811 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:21:10.811 05:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:21:11.383 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.383 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:11.383 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.383 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.383 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.384 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.384 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:11.384 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:11.647 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:11.647 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.647 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.647 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:11.647 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:11.647 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.647 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:11.647 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.647 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.647 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.647 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:11.647 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.647 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.904 00:21:11.904 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.904 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.904 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.904 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.904 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.904 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.904 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.904 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.904 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.904 { 00:21:11.904 "cntlid": 103, 00:21:11.904 "qid": 0, 00:21:11.904 "state": "enabled", 00:21:11.904 "thread": "nvmf_tgt_poll_group_000", 00:21:11.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:11.904 "listen_address": { 00:21:11.904 "trtype": "TCP", 00:21:11.904 "adrfam": "IPv4", 00:21:11.904 "traddr": "10.0.0.2", 00:21:11.904 "trsvcid": "4420" 00:21:11.904 }, 00:21:11.904 "peer_address": { 00:21:11.904 "trtype": "TCP", 00:21:11.904 "adrfam": "IPv4", 00:21:11.904 "traddr": "10.0.0.1", 00:21:11.904 "trsvcid": "41204" 00:21:11.904 }, 00:21:11.904 "auth": { 00:21:11.904 "state": "completed", 00:21:11.904 "digest": "sha512", 00:21:11.904 "dhgroup": "null" 00:21:11.904 } 00:21:11.904 } 00:21:11.904 ]' 00:21:11.904 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.162 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.162 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.162 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:12.162 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.162 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.162 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.162 05:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.419 05:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:21:12.419 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.118 05:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.406 00:21:13.406 05:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.406 05:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.406 05:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.664 05:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.664 05:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.664 05:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.664 05:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.664 05:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.664 05:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.664 { 00:21:13.664 "cntlid": 105, 00:21:13.664 "qid": 0, 00:21:13.664 "state": "enabled", 00:21:13.664 "thread": "nvmf_tgt_poll_group_000", 00:21:13.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:13.664 "listen_address": { 00:21:13.664 "trtype": "TCP", 00:21:13.664 "adrfam": "IPv4", 00:21:13.664 "traddr": "10.0.0.2", 00:21:13.664 "trsvcid": "4420" 00:21:13.664 }, 00:21:13.664 "peer_address": { 00:21:13.664 "trtype": "TCP", 00:21:13.664 "adrfam": "IPv4", 00:21:13.664 "traddr": "10.0.0.1", 00:21:13.664 "trsvcid": "41228" 00:21:13.664 }, 00:21:13.664 "auth": { 00:21:13.664 "state": "completed", 00:21:13.664 "digest": "sha512", 00:21:13.664 "dhgroup": "ffdhe2048" 00:21:13.664 } 00:21:13.664 } 00:21:13.664 ]' 00:21:13.664 05:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.664 05:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.664 05:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.664 05:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:13.664 05:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.664 05:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.664 05:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.664 05:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.922 05:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:21:13.922 05:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:21:14.486 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.486 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:14.486 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.486 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.486 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.486 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.486 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:14.486 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:14.744 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:14.744 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.744 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.744 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:14.744 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:14.744 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.744 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.744 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.744 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:14.744 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.744 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.744 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.744 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.002 00:21:15.002 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.002 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.002 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.002 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.002 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.002 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.002 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.259 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.260 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.260 { 00:21:15.260 "cntlid": 107, 00:21:15.260 "qid": 0, 00:21:15.260 "state": "enabled", 00:21:15.260 "thread": "nvmf_tgt_poll_group_000", 00:21:15.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:15.260 "listen_address": { 00:21:15.260 "trtype": "TCP", 00:21:15.260 "adrfam": "IPv4", 00:21:15.260 "traddr": "10.0.0.2", 00:21:15.260 "trsvcid": "4420" 00:21:15.260 }, 00:21:15.260 "peer_address": { 00:21:15.260 "trtype": "TCP", 00:21:15.260 "adrfam": "IPv4", 00:21:15.260 "traddr": "10.0.0.1", 00:21:15.260 "trsvcid": "41250" 00:21:15.260 }, 00:21:15.260 "auth": { 00:21:15.260 "state": "completed", 00:21:15.260 "digest": "sha512", 00:21:15.260 "dhgroup": "ffdhe2048" 00:21:15.260 } 00:21:15.260 } 00:21:15.260 ]' 00:21:15.260 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.260 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.260 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.260 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:15.260 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:15.260 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.260 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.260 05:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.517 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:21:15.517 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:21:16.082 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.082 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:16.082 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.082 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.082 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.082 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.082 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:16.082 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:16.340 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:16.340 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.340 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:16.340 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:16.340 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:16.340 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.340 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:16.340 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.340 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.340 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.340 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.340 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.340 05:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.598 00:21:16.598 05:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.598 05:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.598 05:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.598 05:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.598 05:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.598 05:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.598 05:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.598 05:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.598 05:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.598 { 00:21:16.598 "cntlid": 109, 00:21:16.598 "qid": 0, 00:21:16.598 "state": "enabled", 00:21:16.598 "thread": "nvmf_tgt_poll_group_000", 00:21:16.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:16.598 "listen_address": { 00:21:16.598 "trtype": "TCP", 00:21:16.598 "adrfam": "IPv4", 00:21:16.598 "traddr": "10.0.0.2", 00:21:16.598 "trsvcid": "4420" 00:21:16.598 }, 00:21:16.598 "peer_address": { 00:21:16.598 "trtype": "TCP", 00:21:16.598 "adrfam": "IPv4", 00:21:16.598 "traddr": "10.0.0.1", 00:21:16.598 "trsvcid": "32828" 00:21:16.598 }, 00:21:16.598 "auth": { 00:21:16.598 "state": "completed", 00:21:16.598 "digest": "sha512", 00:21:16.598 "dhgroup": "ffdhe2048" 00:21:16.598 } 00:21:16.598 } 00:21:16.598 ]' 00:21:16.598 05:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.855 05:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.855 05:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.855 05:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:16.855 05:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.855 05:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.855 05:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.855 05:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.113 05:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:21:17.113 05:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:21:17.678 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.678 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:17.678 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.678 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.678 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.678 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.678 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.678 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.678 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:17.678 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.678 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.678 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:17.678 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:17.678 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.678 05:49:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:17.678 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.678 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.935 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.935 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:17.935 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.935 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.935 00:21:18.192 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.192 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.192 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.192 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.192 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.192 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.192 05:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.192 05:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.192 05:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.192 { 00:21:18.192 "cntlid": 111, 00:21:18.192 "qid": 0, 00:21:18.192 "state": "enabled", 00:21:18.192 "thread": "nvmf_tgt_poll_group_000", 00:21:18.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:18.192 "listen_address": { 00:21:18.192 "trtype": "TCP", 00:21:18.192 "adrfam": "IPv4", 00:21:18.192 "traddr": "10.0.0.2", 00:21:18.192 "trsvcid": "4420" 00:21:18.192 }, 00:21:18.192 "peer_address": { 00:21:18.192 "trtype": "TCP", 00:21:18.192 "adrfam": "IPv4", 00:21:18.192 "traddr": "10.0.0.1", 00:21:18.192 "trsvcid": "32852" 00:21:18.192 }, 00:21:18.192 "auth": { 00:21:18.192 "state": "completed", 00:21:18.192 "digest": "sha512", 00:21:18.192 "dhgroup": "ffdhe2048" 00:21:18.192 } 00:21:18.192 } 00:21:18.192 ]' 00:21:18.192 05:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.192 05:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.192 
05:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.450 05:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:18.450 05:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.450 05:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.450 05:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.450 05:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.708 05:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:21:18.708 05:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:21:19.273 05:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.273 05:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:19.273 05:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.273 05:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.273 05:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.273 05:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.273 05:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.273 05:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:19.273 05:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:19.273 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:19.273 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.273 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:19.273 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:19.273 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:19.273 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.273 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.273 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.273 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.273 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.273 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.273 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.273 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.531 00:21:19.531 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.531 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.531 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.788 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.788 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.788 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.788 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.788 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.788 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.788 { 00:21:19.788 "cntlid": 113, 00:21:19.788 "qid": 0, 00:21:19.788 "state": "enabled", 00:21:19.788 "thread": "nvmf_tgt_poll_group_000", 00:21:19.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:19.788 "listen_address": { 00:21:19.788 "trtype": "TCP", 00:21:19.788 "adrfam": "IPv4", 00:21:19.788 "traddr": "10.0.0.2", 00:21:19.788 "trsvcid": "4420" 00:21:19.788 }, 00:21:19.788 "peer_address": { 00:21:19.788 "trtype": "TCP", 00:21:19.788 "adrfam": "IPv4", 00:21:19.788 "traddr": "10.0.0.1", 00:21:19.788 "trsvcid": "32874" 00:21:19.788 }, 00:21:19.788 "auth": { 00:21:19.788 "state": "completed", 00:21:19.788 "digest": "sha512", 00:21:19.788 "dhgroup": "ffdhe3072" 00:21:19.788 } 00:21:19.788 } 00:21:19.788 ]' 00:21:19.788 05:49:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.788 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.788 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.046 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:20.046 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.046 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.046 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.046 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.046 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:21:20.046 05:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:21:20.610 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.610 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:20.610 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.610 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.610 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.610 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.610 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:20.610 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:20.868 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:20.868 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.868 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:20.868 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:20.868 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:20.868 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.868 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.868 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.868 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.868 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.868 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.868 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.868 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.125 00:21:21.125 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.125 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.125 05:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.381 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.381 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.381 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.381 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.381 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.382 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.382 { 00:21:21.382 "cntlid": 115, 00:21:21.382 "qid": 0, 00:21:21.382 "state": "enabled", 00:21:21.382 "thread": "nvmf_tgt_poll_group_000", 00:21:21.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:21.382 "listen_address": { 00:21:21.382 "trtype": "TCP", 00:21:21.382 "adrfam": "IPv4", 00:21:21.382 "traddr": "10.0.0.2", 00:21:21.382 "trsvcid": "4420" 00:21:21.382 }, 00:21:21.382 "peer_address": { 00:21:21.382 "trtype": "TCP", 00:21:21.382 "adrfam": "IPv4", 
00:21:21.382 "traddr": "10.0.0.1", 00:21:21.382 "trsvcid": "32904" 00:21:21.382 }, 00:21:21.382 "auth": { 00:21:21.382 "state": "completed", 00:21:21.382 "digest": "sha512", 00:21:21.382 "dhgroup": "ffdhe3072" 00:21:21.382 } 00:21:21.382 } 00:21:21.382 ]' 00:21:21.382 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.382 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.382 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.382 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:21.382 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.382 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.639 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.639 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.639 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:21:21.639 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:21:22.209 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.209 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:22.209 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.209 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.209 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.209 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.209 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:22.209 05:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:22.467 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:22.467 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.467 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.467 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:22.467 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:22.467 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.467 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.467 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.467 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.467 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.467 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.467 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.467 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.724 00:21:22.724 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.724 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.724 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.982 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.982 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.982 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.982 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.982 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.982 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.982 { 00:21:22.982 "cntlid": 117, 00:21:22.982 "qid": 0, 00:21:22.982 "state": "enabled", 00:21:22.982 "thread": "nvmf_tgt_poll_group_000", 00:21:22.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:22.982 "listen_address": { 00:21:22.982 "trtype": "TCP", 
00:21:22.982 "adrfam": "IPv4", 00:21:22.982 "traddr": "10.0.0.2", 00:21:22.982 "trsvcid": "4420" 00:21:22.982 }, 00:21:22.982 "peer_address": { 00:21:22.982 "trtype": "TCP", 00:21:22.982 "adrfam": "IPv4", 00:21:22.982 "traddr": "10.0.0.1", 00:21:22.982 "trsvcid": "32944" 00:21:22.982 }, 00:21:22.982 "auth": { 00:21:22.982 "state": "completed", 00:21:22.982 "digest": "sha512", 00:21:22.982 "dhgroup": "ffdhe3072" 00:21:22.982 } 00:21:22.982 } 00:21:22.982 ]' 00:21:22.982 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.982 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.982 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.982 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:22.982 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.982 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.982 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.982 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.239 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:21:23.239 05:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:21:23.804 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.804 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:23.804 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.804 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.804 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.804 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.804 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:23.804 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.062 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:24.062 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.062 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.062 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:24.062 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:24.062 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.062 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:24.062 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.062 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.062 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.062 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:24.062 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.062 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.320 00:21:24.320 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.320 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.320 05:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.578 05:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.578 05:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.578 05:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.578 05:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.578 05:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.578 05:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.578 { 00:21:24.578 "cntlid": 119, 00:21:24.578 "qid": 0, 00:21:24.578 "state": "enabled", 00:21:24.578 "thread": "nvmf_tgt_poll_group_000", 00:21:24.578 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:24.578 "listen_address": { 00:21:24.578 "trtype": "TCP", 00:21:24.578 "adrfam": "IPv4", 00:21:24.578 "traddr": "10.0.0.2", 00:21:24.578 "trsvcid": "4420" 00:21:24.578 }, 00:21:24.578 "peer_address": { 00:21:24.578 "trtype": "TCP", 00:21:24.578 "adrfam": "IPv4", 00:21:24.578 "traddr": "10.0.0.1", 00:21:24.578 "trsvcid": "32976" 00:21:24.578 }, 00:21:24.578 "auth": { 00:21:24.578 "state": "completed", 00:21:24.578 "digest": "sha512", 00:21:24.578 "dhgroup": "ffdhe3072" 00:21:24.578 } 00:21:24.578 } 00:21:24.578 ]' 00:21:24.578 05:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.578 05:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.578 05:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.578 05:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:24.578 05:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.578 05:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.578 05:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.578 05:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.836 05:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:21:24.836 05:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:21:25.401 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.401 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:25.401 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.401 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.401 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.401 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:25.401 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.401 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:25.401 05:49:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:25.659 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:25.659 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.659 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.659 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:25.659 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:25.659 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.659 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.659 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.659 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.659 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.659 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.659 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.659 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.917 00:21:25.917 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.917 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.917 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.917 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.917 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.917 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.917 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.917 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.917 05:49:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.917 { 00:21:25.917 "cntlid": 121, 00:21:25.917 "qid": 0, 00:21:25.917 "state": "enabled", 00:21:25.917 "thread": "nvmf_tgt_poll_group_000", 00:21:25.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:25.917 "listen_address": { 00:21:25.917 "trtype": "TCP", 00:21:25.917 "adrfam": "IPv4", 00:21:25.917 "traddr": "10.0.0.2", 00:21:25.917 "trsvcid": "4420" 00:21:25.917 }, 00:21:25.917 "peer_address": { 00:21:25.917 "trtype": "TCP", 00:21:25.917 "adrfam": "IPv4", 00:21:25.917 "traddr": "10.0.0.1", 00:21:25.917 "trsvcid": "33006" 00:21:25.917 }, 00:21:25.917 "auth": { 00:21:25.917 "state": "completed", 00:21:25.917 "digest": "sha512", 00:21:25.917 "dhgroup": "ffdhe4096" 00:21:25.917 } 00:21:25.917 } 00:21:25.917 ]' 00:21:25.917 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.175 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.175 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.175 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:26.175 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.175 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.175 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.175 05:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.433 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:21:26.433 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:21:26.998 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.998 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:26.998 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.998 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.998 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:26.998 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.998 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:26.998 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:26.998 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:26.998 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.998 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.998 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:26.998 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:26.998 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.998 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.998 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.998 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.998 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.998 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.998 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.998 05:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.564 00:21:27.564 05:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.564 05:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.564 05:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.564 05:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.564 05:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.564 05:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.564 05:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.564 05:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.564 05:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.564 { 00:21:27.564 "cntlid": 123, 00:21:27.564 "qid": 0, 00:21:27.564 "state": "enabled", 00:21:27.564 "thread": "nvmf_tgt_poll_group_000", 00:21:27.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:27.564 "listen_address": { 00:21:27.564 "trtype": "TCP", 00:21:27.564 "adrfam": "IPv4", 00:21:27.564 "traddr": "10.0.0.2", 00:21:27.564 "trsvcid": "4420" 00:21:27.564 }, 00:21:27.564 "peer_address": { 00:21:27.564 "trtype": "TCP", 00:21:27.564 "adrfam": "IPv4", 00:21:27.564 "traddr": "10.0.0.1", 00:21:27.564 "trsvcid": "38580" 00:21:27.564 }, 00:21:27.564 "auth": { 00:21:27.564 "state": "completed", 00:21:27.564 "digest": "sha512", 00:21:27.564 "dhgroup": "ffdhe4096" 00:21:27.564 } 00:21:27.564 } 00:21:27.564 ]' 00:21:27.564 05:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.564 05:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.564 05:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.564 05:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:27.564 05:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.822 05:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.822 05:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.822 05:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.822 05:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:21:27.822 05:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:21:28.387 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.387 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:28.387 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.387 05:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.387 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.387 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.387 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.387 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.644 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:28.644 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.644 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.644 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:28.644 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:28.644 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.644 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.644 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.644 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.644 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.644 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.644 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.644 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.902 00:21:28.902 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.902 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.902 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.159 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.159 05:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.159 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.159 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.159 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.159 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.159 { 00:21:29.159 "cntlid": 125, 00:21:29.159 "qid": 0, 00:21:29.159 "state": "enabled", 00:21:29.159 "thread": "nvmf_tgt_poll_group_000", 00:21:29.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:29.159 "listen_address": { 00:21:29.159 "trtype": "TCP", 00:21:29.159 "adrfam": "IPv4", 00:21:29.159 "traddr": "10.0.0.2", 00:21:29.159 "trsvcid": "4420" 00:21:29.159 }, 00:21:29.159 "peer_address": { 00:21:29.159 "trtype": "TCP", 00:21:29.159 "adrfam": "IPv4", 00:21:29.159 "traddr": "10.0.0.1", 00:21:29.159 "trsvcid": "38600" 00:21:29.159 }, 00:21:29.159 "auth": { 00:21:29.159 "state": "completed", 00:21:29.159 "digest": "sha512", 00:21:29.159 "dhgroup": "ffdhe4096" 00:21:29.159 } 00:21:29.159 } 00:21:29.159 ]' 00:21:29.159 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.159 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.159 05:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.417 05:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:29.417 05:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.417 05:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.417 05:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.417 05:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.417 05:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:21:29.417 05:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:21:29.983 05:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.984 05:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:29.984 05:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.984 05:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.984 05:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.984 05:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.984 05:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:29.984 05:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:30.241 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:30.241 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.241 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.241 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:30.241 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:30.241 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.241 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:30.242 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.242 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.242 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.242 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:30.242 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.242 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.500 00:21:30.500 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.500 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.500 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.757 05:50:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.757 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.757 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.757 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.758 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.758 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.758 { 00:21:30.758 "cntlid": 127, 00:21:30.758 "qid": 0, 00:21:30.758 "state": "enabled", 00:21:30.758 "thread": "nvmf_tgt_poll_group_000", 00:21:30.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:30.758 "listen_address": { 00:21:30.758 "trtype": "TCP", 00:21:30.758 "adrfam": "IPv4", 00:21:30.758 "traddr": "10.0.0.2", 00:21:30.758 "trsvcid": "4420" 00:21:30.758 }, 00:21:30.758 "peer_address": { 00:21:30.758 "trtype": "TCP", 00:21:30.758 "adrfam": "IPv4", 00:21:30.758 "traddr": "10.0.0.1", 00:21:30.758 "trsvcid": "38620" 00:21:30.758 }, 00:21:30.758 "auth": { 00:21:30.758 "state": "completed", 00:21:30.758 "digest": "sha512", 00:21:30.758 "dhgroup": "ffdhe4096" 00:21:30.758 } 00:21:30.758 } 00:21:30.758 ]' 00:21:30.758 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.758 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.758 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.758 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:30.758 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.015 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.015 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.015 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.016 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:21:31.016 05:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:21:31.580 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.580 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:31.580 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.580 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.580 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.580 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:31.580 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.580 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:31.580 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:31.837 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:31.837 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.837 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.837 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:31.837 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:31.837 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.837 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.837 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.837 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.837 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.837 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.837 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.837 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.095 00:21:32.095 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.095 05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.095 
05:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.352 05:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.352 05:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.352 05:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.352 05:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.352 05:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.352 05:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.352 { 00:21:32.352 "cntlid": 129, 00:21:32.352 "qid": 0, 00:21:32.352 "state": "enabled", 00:21:32.352 "thread": "nvmf_tgt_poll_group_000", 00:21:32.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:32.352 "listen_address": { 00:21:32.352 "trtype": "TCP", 00:21:32.352 "adrfam": "IPv4", 00:21:32.352 "traddr": "10.0.0.2", 00:21:32.352 "trsvcid": "4420" 00:21:32.352 }, 00:21:32.352 "peer_address": { 00:21:32.352 "trtype": "TCP", 00:21:32.352 "adrfam": "IPv4", 00:21:32.352 "traddr": "10.0.0.1", 00:21:32.352 "trsvcid": "38652" 00:21:32.352 }, 00:21:32.352 "auth": { 00:21:32.352 "state": "completed", 00:21:32.352 "digest": "sha512", 00:21:32.352 "dhgroup": "ffdhe6144" 00:21:32.352 } 00:21:32.352 } 00:21:32.352 ]' 00:21:32.352 05:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.352 05:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.352 05:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.610 05:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:32.610 05:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.610 05:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.610 05:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.610 05:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.610 05:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:21:32.610 05:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:21:33.175 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.433 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:33.433 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.433 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.433 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.433 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.433 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:33.433 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:33.433 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:33.433 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.433 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.433 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:33.433 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:33.433 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.433 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.433 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.433 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.433 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.433 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.433 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.433 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.998 00:21:33.998 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.998 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.998 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.998 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.998 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.998 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.998 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.998 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.998 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.998 { 00:21:33.998 "cntlid": 131, 00:21:33.998 "qid": 0, 00:21:33.998 "state": "enabled", 00:21:33.998 "thread": "nvmf_tgt_poll_group_000", 00:21:33.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:33.998 "listen_address": { 00:21:33.998 "trtype": "TCP", 00:21:33.998 "adrfam": "IPv4", 00:21:33.998 "traddr": "10.0.0.2", 00:21:33.998 "trsvcid": "4420" 00:21:33.998 }, 00:21:33.998 "peer_address": { 00:21:33.998 "trtype": "TCP", 00:21:33.998 "adrfam": "IPv4", 00:21:33.998 "traddr": "10.0.0.1", 00:21:33.998 "trsvcid": "38688" 00:21:33.998 }, 00:21:33.998 "auth": { 00:21:33.998 "state": "completed", 00:21:33.998 "digest": "sha512", 00:21:33.998 "dhgroup": "ffdhe6144" 00:21:33.998 } 00:21:33.998 } 00:21:33.998 ]' 00:21:33.998 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.998 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.998 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.255 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:34.255 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.255 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.255 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.256 05:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.513 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:21:34.513 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:21:35.078 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.078 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:35.078 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.078 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.078 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.078 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.078 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:35.078 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:35.078 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:35.078 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.078 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.078 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:35.078 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:35.078 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.078 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.078 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.078 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.078 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.078 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.078 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.078 05:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.643 00:21:35.643 05:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.643 05:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.643 05:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.643 05:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.643 05:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.643 05:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.643 05:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.643 05:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.643 05:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.643 { 00:21:35.643 "cntlid": 133, 00:21:35.643 "qid": 0, 00:21:35.643 "state": "enabled", 00:21:35.643 "thread": "nvmf_tgt_poll_group_000", 00:21:35.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:35.643 "listen_address": { 00:21:35.643 "trtype": "TCP", 00:21:35.643 "adrfam": "IPv4", 00:21:35.643 "traddr": "10.0.0.2", 00:21:35.643 "trsvcid": "4420" 00:21:35.643 }, 00:21:35.643 "peer_address": { 00:21:35.643 "trtype": "TCP", 00:21:35.643 "adrfam": "IPv4", 00:21:35.643 "traddr": "10.0.0.1", 00:21:35.643 "trsvcid": "38724" 00:21:35.643 }, 00:21:35.643 "auth": { 00:21:35.643 "state": "completed", 00:21:35.643 "digest": "sha512", 00:21:35.643 "dhgroup": "ffdhe6144" 00:21:35.643 } 00:21:35.643 } 00:21:35.643 ]' 00:21:35.643 05:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.643 05:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.643 05:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.901 05:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:35.901 05:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.901 05:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.901 05:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.901 05:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.901 05:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret 
DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:21:35.901 05:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:21:36.465 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.723 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:36.723 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.723 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.723 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.723 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.723 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:36.723 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:36.723 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:36.723 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.723 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.723 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:36.723 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:36.723 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.723 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:36.723 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.723 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.723 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.723 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:36.723 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:21:36.723 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.289 00:21:37.289 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.289 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.289 05:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.289 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.289 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.289 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.289 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.289 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.289 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.289 { 00:21:37.289 "cntlid": 135, 00:21:37.289 "qid": 0, 00:21:37.289 "state": "enabled", 00:21:37.289 "thread": "nvmf_tgt_poll_group_000", 00:21:37.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:37.289 "listen_address": { 00:21:37.289 "trtype": "TCP", 00:21:37.289 "adrfam": "IPv4", 00:21:37.289 "traddr": "10.0.0.2", 00:21:37.289 "trsvcid": "4420" 00:21:37.289 }, 00:21:37.289 "peer_address": { 00:21:37.289 "trtype": "TCP", 00:21:37.289 "adrfam": "IPv4", 00:21:37.289 "traddr": "10.0.0.1", 00:21:37.289 "trsvcid": "56518" 00:21:37.289 }, 00:21:37.289 "auth": { 00:21:37.289 "state": "completed", 00:21:37.289 "digest": "sha512", 00:21:37.289 "dhgroup": "ffdhe6144" 00:21:37.289 } 00:21:37.289 } 00:21:37.289 ]' 00:21:37.289 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.289 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.289 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.547 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:37.547 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.547 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.547 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.547 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.547 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:21:37.547 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:21:38.147 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.147 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:38.147 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.147 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.147 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.147 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:38.147 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.147 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:38.147 05:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:38.405 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:38.405 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.405 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.405 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:38.405 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:38.405 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.405 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.405 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.405 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.405 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.405 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.405 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.405 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.970 00:21:38.970 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.970 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.970 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.228 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.228 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.228 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.228 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.228 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.228 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.228 { 00:21:39.228 "cntlid": 137, 00:21:39.228 "qid": 0, 00:21:39.228 "state": "enabled", 00:21:39.228 "thread": "nvmf_tgt_poll_group_000", 00:21:39.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:39.228 "listen_address": { 00:21:39.228 "trtype": "TCP", 00:21:39.228 "adrfam": "IPv4", 00:21:39.228 "traddr": "10.0.0.2", 00:21:39.228 "trsvcid": "4420" 00:21:39.228 }, 00:21:39.228 "peer_address": { 00:21:39.228 "trtype": "TCP", 00:21:39.228 "adrfam": "IPv4", 00:21:39.228 "traddr": "10.0.0.1", 00:21:39.228 "trsvcid": "56554" 00:21:39.228 }, 00:21:39.228 "auth": { 00:21:39.228 "state": "completed", 00:21:39.228 "digest": "sha512", 00:21:39.228 "dhgroup": "ffdhe8192" 00:21:39.228 } 00:21:39.228 } 00:21:39.228 ]' 00:21:39.228 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.228 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.228 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.228 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:39.228 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.228 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.228 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.228 05:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.486 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:21:39.486 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:21:40.052 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.052 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:40.052 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.052 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.052 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.052 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.052 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:40.052 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:40.310 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:40.310 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.310 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.310 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:40.310 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:40.310 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.310 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.310 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.310 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.310 05:50:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.310 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.310 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.310 05:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.876 00:21:40.876 05:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.876 05:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.876 05:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.876 05:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.876 05:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.876 05:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.876 05:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.876 05:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.876 05:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.876 { 00:21:40.876 "cntlid": 139, 00:21:40.876 "qid": 0, 00:21:40.876 "state": "enabled", 00:21:40.876 "thread": "nvmf_tgt_poll_group_000", 00:21:40.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:40.876 "listen_address": { 00:21:40.876 "trtype": "TCP", 00:21:40.876 "adrfam": "IPv4", 00:21:40.876 "traddr": "10.0.0.2", 00:21:40.876 "trsvcid": "4420" 00:21:40.876 }, 00:21:40.876 "peer_address": { 00:21:40.876 "trtype": "TCP", 00:21:40.876 "adrfam": "IPv4", 00:21:40.876 "traddr": "10.0.0.1", 00:21:40.876 "trsvcid": "56594" 00:21:40.876 }, 00:21:40.876 "auth": { 00:21:40.876 "state": "completed", 00:21:40.876 "digest": "sha512", 00:21:40.876 "dhgroup": "ffdhe8192" 00:21:40.876 } 00:21:40.876 } 00:21:40.876 ]' 00:21:40.876 05:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.876 05:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.876 05:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.876 05:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:40.876 05:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.134 05:50:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.134 05:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.134 05:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.134 05:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:21:41.134 05:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: --dhchap-ctrl-secret DHHC-1:02:NGM1MTA3MjFjMjczYjM3YmI4Njc4MTE0MGIyZDVmZWYzNmFkNmYzY2ZmNmQ1Y2Q4uGqw4g==: 00:21:41.697 05:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.697 05:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:41.697 05:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.697 05:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.697 05:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.697 05:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.697 05:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:41.697 05:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:41.955 05:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:41.955 05:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.955 05:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.955 05:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:41.955 05:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:41.955 05:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.955 05:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.955 05:50:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.955 05:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.955 05:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.955 05:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.955 05:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.955 05:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.520 00:21:42.520 05:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.520 05:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.520 05:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.778 05:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.778 05:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.778 05:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.778 05:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.778 05:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.778 05:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.778 { 00:21:42.778 "cntlid": 141, 00:21:42.778 "qid": 0, 00:21:42.778 "state": "enabled", 00:21:42.778 "thread": "nvmf_tgt_poll_group_000", 00:21:42.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:42.778 "listen_address": { 00:21:42.778 "trtype": "TCP", 00:21:42.778 "adrfam": "IPv4", 00:21:42.778 "traddr": "10.0.0.2", 00:21:42.778 "trsvcid": "4420" 00:21:42.778 }, 00:21:42.778 "peer_address": { 00:21:42.778 "trtype": "TCP", 00:21:42.778 "adrfam": "IPv4", 00:21:42.778 "traddr": "10.0.0.1", 00:21:42.778 "trsvcid": "56630" 00:21:42.778 }, 00:21:42.778 "auth": { 00:21:42.778 "state": "completed", 00:21:42.778 "digest": "sha512", 00:21:42.778 "dhgroup": "ffdhe8192" 00:21:42.778 } 00:21:42.778 } 00:21:42.778 ]' 00:21:42.778 05:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.778 05:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.778 05:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.778 05:50:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:42.778 05:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.778 05:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.778 05:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.778 05:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.036 05:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:21:43.036 05:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:01:NzQxNWIxNGI3MWI3ZDlkNmY0OTE0YWU0MzA3OGYwNTAj1cFw: 00:21:43.601 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.601 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:43.601 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.601 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.601 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.601 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.601 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:43.601 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:43.860 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:43.860 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.860 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:43.860 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:43.860 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:43.860 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.860 05:50:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:43.860 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.860 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.860 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.860 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:43.860 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.860 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.117 00:21:44.375 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.375 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.375 05:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.375 05:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.375 05:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.375 05:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.375 05:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.375 05:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.375 05:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.375 { 00:21:44.375 "cntlid": 143, 00:21:44.375 "qid": 0, 00:21:44.375 "state": "enabled", 00:21:44.375 "thread": "nvmf_tgt_poll_group_000", 00:21:44.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:44.375 "listen_address": { 00:21:44.375 "trtype": "TCP", 00:21:44.375 "adrfam": "IPv4", 00:21:44.375 "traddr": "10.0.0.2", 00:21:44.375 "trsvcid": "4420" 00:21:44.375 }, 00:21:44.375 "peer_address": { 00:21:44.375 "trtype": "TCP", 00:21:44.375 "adrfam": "IPv4", 00:21:44.375 "traddr": "10.0.0.1", 00:21:44.375 "trsvcid": "56656" 00:21:44.375 }, 00:21:44.375 "auth": { 00:21:44.375 "state": "completed", 00:21:44.375 "digest": "sha512", 00:21:44.375 "dhgroup": "ffdhe8192" 00:21:44.375 } 00:21:44.375 } 00:21:44.375 ]' 00:21:44.375 05:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.375 05:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.633 
05:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.633 05:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:44.633 05:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.633 05:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.633 05:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.633 05:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.891 05:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:21:44.891 05:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:21:45.456 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.456 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:45.456 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.456 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.456 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.456 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:45.456 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:45.456 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:45.456 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:45.456 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:45.456 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:45.456 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:45.456 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.456 05:50:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.456 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:45.456 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:45.456 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.456 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.457 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.457 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.457 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.457 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.457 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.457 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.022 00:21:46.022 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.022 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.022 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.280 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.280 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.280 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.280 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.280 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.280 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.280 { 00:21:46.280 "cntlid": 145, 00:21:46.280 "qid": 0, 00:21:46.280 "state": "enabled", 00:21:46.280 "thread": "nvmf_tgt_poll_group_000", 00:21:46.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:46.280 "listen_address": { 00:21:46.280 "trtype": "TCP", 00:21:46.280 "adrfam": "IPv4", 00:21:46.280 "traddr": "10.0.0.2", 00:21:46.280 "trsvcid": "4420" 00:21:46.280 }, 00:21:46.280 "peer_address": { 00:21:46.280 
"trtype": "TCP", 00:21:46.280 "adrfam": "IPv4", 00:21:46.280 "traddr": "10.0.0.1", 00:21:46.280 "trsvcid": "56680" 00:21:46.280 }, 00:21:46.280 "auth": { 00:21:46.280 "state": "completed", 00:21:46.280 "digest": "sha512", 00:21:46.280 "dhgroup": "ffdhe8192" 00:21:46.280 } 00:21:46.280 } 00:21:46.280 ]' 00:21:46.280 05:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.280 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.280 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.280 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:46.280 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.280 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.280 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.280 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.538 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:21:46.538 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Yzg5NTEzYjhhMzRhNzk3ZWQyODc0ODlhZWIxOTMwNzcwMDA2ZGEwNzNlNmJjMzY43fu/QQ==: --dhchap-ctrl-secret DHHC-1:03:ZmM5MjcxZmU4ZWE2YzBkOWU5YjEzZDQwYjZkY2Q4ZGNlYjAwYjlmYzdiYzNlNjczYWQzOGNiOGNjYWQyNTY5OOPHFKw=: 00:21:47.104 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.104 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:47.104 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.104 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.104 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.104 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:47.104 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.104 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.104 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.104 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:47.104 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:47.104 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:47.104 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:47.104 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.104 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:47.104 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.104 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:47.104 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:47.104 05:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:47.670 request: 00:21:47.670 { 00:21:47.670 "name": "nvme0", 00:21:47.670 "trtype": "tcp", 00:21:47.670 "traddr": "10.0.0.2", 00:21:47.670 "adrfam": "ipv4", 00:21:47.670 "trsvcid": "4420", 00:21:47.670 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:47.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:47.670 "prchk_reftag": false, 00:21:47.670 "prchk_guard": false, 00:21:47.670 "hdgst": false, 00:21:47.670 "ddgst": false, 00:21:47.670 "dhchap_key": "key2", 00:21:47.670 "allow_unrecognized_csi": false, 00:21:47.670 "method": "bdev_nvme_attach_controller", 00:21:47.670 "req_id": 1 00:21:47.670 } 00:21:47.670 Got JSON-RPC error response 00:21:47.670 response: 00:21:47.670 { 00:21:47.670 "code": -5, 00:21:47.670 "message": "Input/output error" 00:21:47.670 } 00:21:47.670 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:47.670 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:47.670 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:47.670 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:47.670 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:47.670 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.670 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.670 05:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.670 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.670 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.670 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.670 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.670 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:47.670 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:47.670 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:47.670 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:47.670 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.670 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:47.670 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.670 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:47.670 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:47.670 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:48.236 request: 00:21:48.236 { 00:21:48.236 "name": "nvme0", 00:21:48.236 "trtype": "tcp", 00:21:48.236 "traddr": "10.0.0.2", 00:21:48.236 "adrfam": "ipv4", 00:21:48.236 "trsvcid": "4420", 00:21:48.236 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:48.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:48.236 "prchk_reftag": false, 00:21:48.236 "prchk_guard": false, 00:21:48.236 "hdgst": false, 00:21:48.236 "ddgst": false, 00:21:48.236 "dhchap_key": "key1", 00:21:48.236 "dhchap_ctrlr_key": "ckey2", 00:21:48.236 "allow_unrecognized_csi": false, 00:21:48.236 "method": "bdev_nvme_attach_controller", 00:21:48.236 "req_id": 1 00:21:48.236 } 00:21:48.236 Got JSON-RPC error response 00:21:48.236 response: 00:21:48.236 { 00:21:48.236 "code": -5, 00:21:48.236 "message": "Input/output error" 00:21:48.236 } 00:21:48.236 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:48.236 05:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:48.236 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:48.236 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:48.236 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:48.236 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.236 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.236 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.236 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:48.236 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.236 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.236 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.236 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.236 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:48.236 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.236 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:48.236 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:48.236 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:48.236 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:48.236 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.236 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.236 05:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.494 request: 00:21:48.494 { 00:21:48.494 "name": "nvme0", 00:21:48.494 "trtype": "tcp", 00:21:48.494 "traddr": "10.0.0.2", 00:21:48.494 "adrfam": "ipv4", 00:21:48.494 "trsvcid": "4420", 00:21:48.494 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:48.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:48.494 "prchk_reftag": false, 00:21:48.494 "prchk_guard": false, 00:21:48.494 "hdgst": false, 00:21:48.494 "ddgst": false, 00:21:48.494 "dhchap_key": "key1", 00:21:48.494 "dhchap_ctrlr_key": "ckey1", 00:21:48.494 "allow_unrecognized_csi": false, 00:21:48.494 "method": "bdev_nvme_attach_controller", 00:21:48.494 "req_id": 1 00:21:48.494 } 00:21:48.494 Got JSON-RPC error response 00:21:48.494 response: 00:21:48.494 { 00:21:48.494 "code": -5, 00:21:48.494 "message": "Input/output error" 00:21:48.494 } 00:21:48.494 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:48.494 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:48.494 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:48.494 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:48.494 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:48.494 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.494 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.494 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.494 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3357927 00:21:48.494 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3357927 ']' 00:21:48.494 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3357927 00:21:48.494 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:48.494 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:48.494 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3357927 00:21:48.494 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:48.494 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:48.494 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3357927' 00:21:48.494 killing process with pid 3357927 00:21:48.494 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3357927 00:21:48.494 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3357927 00:21:48.752 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:48.752 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:48.752 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:48.752 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:48.752 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=3378757 00:21:48.752 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:48.752 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 3378757 00:21:48.752 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3378757 ']' 00:21:48.752 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.752 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:48.752 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.752 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:48.752 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.010 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:49.010 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:49.010 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:49.010 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:49.010 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.010 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.010 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:49.010 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3378757 00:21:49.010 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3378757 ']' 00:21:49.010 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.010 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:49.010 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:49.010 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:49.010 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.268 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:49.268 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:49.268 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:49.268 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.268 05:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.268 null0 00:21:49.268 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.268 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:49.268 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.A01 00:21:49.268 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.268 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.268 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.268 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.eIC ]] 00:21:49.268 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eIC 00:21:49.268 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.268 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.268 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.268 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:49.268 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.vc9 00:21:49.268 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.268 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.268 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.268 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.MFx ]] 00:21:49.268 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MFx 00:21:49.268 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.269 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.269 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.269 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:49.269 05:50:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5NZ 00:21:49.269 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.269 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.269 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.269 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.gcd ]] 00:21:49.269 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gcd 00:21:49.269 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.527 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.527 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.527 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:49.527 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.uoQ 00:21:49.527 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.527 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.527 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.527 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:49.527 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:49.527 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.527 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.527 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:49.527 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:49.527 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.527 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:49.527 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.527 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.527 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.527 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:49.527 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
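
The keyring and host-authorization steps traced here reduce to a short RPC sequence: register each generated key file with the target's keyring, authorize the host NQN on cnode0 with the chosen --dhchap-key, then attach from the host-side application with the matching key and read back the qpair's auth state. A simplified sketch of that sequence follows; it assumes the host application listening on /var/tmp/host.sock already has the same key files registered (done earlier in this log), and it calls rpc.py directly instead of going through the rpc_cmd/hostrpc wrappers used by target/auth.sh.

    # Sketch of the authenticated-attach step; NQNs and key files mirror the trace.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

    # Target side: register the key and require DH-HMAC-CHAP with key3 for this host.
    $RPC keyring_file_add_key key3 /tmp/spdk.key-sha512.uoQ
    $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key3

    # Host side: attach with the matching key, then confirm the qpair completed
    # authentication with the expected digest and DH group.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key3
    $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
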
00:21:49.527 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:50.092 nvme0n1 00:21:50.092 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.092 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.092 05:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.404 05:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.404 05:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.404 05:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.404 05:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.404 05:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.404 05:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.404 { 00:21:50.404 "cntlid": 1, 00:21:50.404 "qid": 0, 00:21:50.404 "state": "enabled", 00:21:50.404 "thread": "nvmf_tgt_poll_group_000", 00:21:50.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:50.404 "listen_address": { 00:21:50.404 "trtype": "TCP", 00:21:50.404 "adrfam": "IPv4", 00:21:50.404 "traddr": "10.0.0.2", 00:21:50.404 "trsvcid": "4420" 00:21:50.404 }, 00:21:50.404 "peer_address": { 00:21:50.404 "trtype": "TCP", 00:21:50.404 "adrfam": "IPv4", 00:21:50.404 "traddr": "10.0.0.1", 00:21:50.404 "trsvcid": "41394" 00:21:50.404 }, 00:21:50.404 "auth": { 00:21:50.404 "state": "completed", 00:21:50.404 "digest": "sha512", 00:21:50.404 "dhgroup": "ffdhe8192" 00:21:50.404 } 00:21:50.404 } 00:21:50.404 ]' 00:21:50.404 05:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.404 05:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.404 05:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.404 05:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:50.404 05:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.404 05:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.404 05:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.404 05:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.703 05:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:21:50.703 05:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:21:51.269 05:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.269 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:51.269 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.269 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.269 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.269 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:51.269 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.269 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.269 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.269 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:51.269 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:51.527 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:51.527 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:51.527 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:51.527 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:51.527 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:51.527 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:51.527 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:51.527 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:51.527 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:51.527 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:51.785 request: 00:21:51.785 { 00:21:51.785 "name": "nvme0", 00:21:51.785 "trtype": "tcp", 00:21:51.785 "traddr": "10.0.0.2", 00:21:51.785 "adrfam": "ipv4", 00:21:51.785 "trsvcid": "4420", 00:21:51.785 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:51.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:51.785 "prchk_reftag": false, 00:21:51.785 "prchk_guard": false, 00:21:51.785 "hdgst": false, 00:21:51.785 "ddgst": false, 00:21:51.785 "dhchap_key": "key3", 00:21:51.785 "allow_unrecognized_csi": false, 00:21:51.785 "method": "bdev_nvme_attach_controller", 00:21:51.785 "req_id": 1 00:21:51.785 } 00:21:51.785 Got JSON-RPC error response 00:21:51.785 response: 00:21:51.785 { 00:21:51.785 "code": -5, 00:21:51.785 "message": "Input/output error" 00:21:51.785 } 00:21:51.785 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:51.785 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:51.785 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:51.785 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:51.785 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:51.785 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:51.785 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:51.785 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:51.785 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:51.785 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:51.785 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:51.785 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:51.785 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:51.785 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:51.785 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:51.785 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:51.785 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:51.785 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:52.044 request: 00:21:52.044 { 00:21:52.044 "name": "nvme0", 00:21:52.044 "trtype": "tcp", 00:21:52.044 "traddr": "10.0.0.2", 00:21:52.044 "adrfam": "ipv4", 00:21:52.044 "trsvcid": "4420", 00:21:52.044 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:52.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:52.044 "prchk_reftag": false, 00:21:52.044 "prchk_guard": false, 00:21:52.044 "hdgst": false, 00:21:52.044 "ddgst": false, 00:21:52.044 "dhchap_key": "key3", 00:21:52.044 "allow_unrecognized_csi": false, 00:21:52.044 "method": "bdev_nvme_attach_controller", 00:21:52.044 "req_id": 1 00:21:52.044 } 00:21:52.044 Got JSON-RPC error response 00:21:52.044 response: 00:21:52.044 { 00:21:52.044 "code": -5, 00:21:52.044 "message": "Input/output error" 00:21:52.044 } 00:21:52.044 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:52.044 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:52.044 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:52.044 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:52.044 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:52.044 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:52.044 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:52.044 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:52.044 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:52.044 05:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:52.302 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:52.302 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.302 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.302 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.302 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:52.302 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.302 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.302 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.302 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:52.302 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:52.302 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:52.302 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:52.302 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:52.302 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:52.302 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:52.302 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:52.302 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:52.302 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:52.560 request: 00:21:52.560 { 00:21:52.560 "name": "nvme0", 00:21:52.560 "trtype": "tcp", 00:21:52.560 "traddr": "10.0.0.2", 00:21:52.560 "adrfam": "ipv4", 00:21:52.560 "trsvcid": "4420", 00:21:52.560 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:52.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:52.560 "prchk_reftag": false, 00:21:52.560 "prchk_guard": false, 00:21:52.560 "hdgst": false, 00:21:52.560 "ddgst": false, 00:21:52.560 "dhchap_key": "key0", 00:21:52.560 "dhchap_ctrlr_key": "key1", 00:21:52.560 "allow_unrecognized_csi": false, 00:21:52.560 "method": "bdev_nvme_attach_controller", 00:21:52.560 "req_id": 1 00:21:52.560 } 00:21:52.560 Got JSON-RPC error response 00:21:52.560 response: 00:21:52.560 { 00:21:52.560 "code": -5, 00:21:52.560 "message": "Input/output error" 00:21:52.560 } 00:21:52.560 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:52.560 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:52.560 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:52.560 05:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:52.560 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:52.560 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:52.560 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:52.820 nvme0n1 00:21:52.820 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:52.820 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:52.820 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.079 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.079 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.079 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.337 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:53.337 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.337 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.337 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.337 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:53.337 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:53.337 05:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:53.903 nvme0n1 00:21:53.903 05:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:53.903 05:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:53.903 05:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.161 05:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.161 05:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:54.161 05:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.161 05:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.161 05:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.161 05:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:54.161 05:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:54.161 05:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.419 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.419 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:21:54.419 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: --dhchap-ctrl-secret DHHC-1:03:NGQ0MzJhMTIzMDFjNjkxZWVhNTZkOGFiYzBlNjM0OWZjODYyZWIzZWUyZjBmZDM5NzY3YzRkODU5ZDU1NjM5YtftUto=: 00:21:54.984 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:54.984 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:54.984 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:54.984 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:54.984 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:54.984 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:54.984 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:54.985 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.985 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.243 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:21:55.243 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:55.243 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:55.243 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:55.243 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.243 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:55.243 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.243 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:55.243 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:55.243 05:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:55.501 request: 00:21:55.501 { 00:21:55.501 "name": "nvme0", 00:21:55.501 "trtype": "tcp", 00:21:55.501 "traddr": "10.0.0.2", 00:21:55.501 "adrfam": "ipv4", 00:21:55.501 "trsvcid": "4420", 00:21:55.501 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:55.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:55.501 "prchk_reftag": false, 00:21:55.501 "prchk_guard": false, 00:21:55.501 "hdgst": false, 00:21:55.501 "ddgst": false, 00:21:55.501 "dhchap_key": "key1", 00:21:55.501 "allow_unrecognized_csi": false, 00:21:55.501 "method": "bdev_nvme_attach_controller", 00:21:55.501 "req_id": 1 00:21:55.501 } 00:21:55.501 Got JSON-RPC error response 00:21:55.501 response: 00:21:55.501 { 00:21:55.501 "code": -5, 00:21:55.501 "message": "Input/output error" 00:21:55.501 } 00:21:55.501 05:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:55.501 05:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:55.501 05:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:55.501 05:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:55.501 05:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:55.501 05:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:55.501 05:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:56.435 nvme0n1 00:21:56.435 05:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:56.435 05:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:56.435 05:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.435 05:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.435 05:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.435 05:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.693 05:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:56.693 05:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.693 05:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.693 05:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.693 05:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:56.693 05:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:56.693 05:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:56.951 nvme0n1 00:21:56.951 05:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:56.951 05:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:56.951 05:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.210 05:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.210 05:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.210 05:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.468 05:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:57.468 05:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.468 05:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.468 05:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.468 05:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: '' 2s 00:21:57.468 05:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:57.468 05:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:57.468 05:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: 00:21:57.468 05:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:57.468 05:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:57.468 05:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:57.468 05:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: ]] 00:21:57.468 05:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MjBiNjQzOTliN2JjZGU2MWY3ZjVkYzk4MDFmYzgxMzMFqMG0: 00:21:57.468 05:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:57.468 05:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:57.468 05:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: 2s 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: ]] 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:OTA5ZjY0Mzc4YzYxMDE1OTBmZWI5NzYwYThkNTk5YWQ2ODFjNDQ1NjdmNmI4MzExeRxLVw==: 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:59.368 05:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:01.897 05:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:01.897 05:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:01.897 05:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:01.897 05:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:01.897 05:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:01.897 05:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:01.897 05:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:01.897 05:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.897 05:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:01.897 05:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.897 05:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.897 05:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.897 05:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:01.897 05:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:01.897 05:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:02.156 nvme0n1 00:22:02.156 05:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:02.156 05:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.156 05:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.156 05:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.156 05:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:02.156 05:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:02.722 05:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:02.722 05:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:02.722 05:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.980 05:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.980 05:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:02.980 05:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.980 05:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.980 05:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.980 05:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:02.980 05:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:03.239 05:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:03.239 05:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:03.239 05:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.239 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.239 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:03.239 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.239 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.239 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.239 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:03.239 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:03.239 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:03.239 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:03.239 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:03.239 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:03.239 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:03.239 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:03.239 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:03.806 request: 00:22:03.806 { 00:22:03.806 "name": "nvme0", 00:22:03.806 "dhchap_key": "key1", 00:22:03.806 "dhchap_ctrlr_key": "key3", 00:22:03.806 "method": "bdev_nvme_set_keys", 00:22:03.806 "req_id": 1 00:22:03.806 } 00:22:03.806 Got JSON-RPC error response 00:22:03.806 response: 00:22:03.806 { 00:22:03.806 "code": -13, 00:22:03.806 "message": "Permission denied" 00:22:03.806 } 00:22:03.806 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:03.806 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:03.806 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:03.806 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:03.806 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:03.806 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:03.806 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.065 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:04.065 05:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:04.998 05:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:04.998 05:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:04.998 05:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.256 05:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:05.256 05:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.256 05:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.256 05:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.256 05:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.256 05:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:05.256 05:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:05.256 05:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:05.822 nvme0n1 00:22:05.822 05:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:05.822 05:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.822 05:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.080 05:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.080 05:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:06.080 05:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:06.080 05:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:06.080 05:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
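
The trace in this stretch exercises live re-keying: nvmf_subsystem_set_keys changes which keys the target expects from this host, bdev_nvme_set_keys re-authenticates the already-attached controller from the host side, and a pair the target was not given is rejected with JSON-RPC error -13 (Permission denied), as in the request/response shown just below. A minimal sketch of the happy-path rotation, reusing the key names from the trace and rpc.py in place of the rpc_cmd/hostrpc wrappers:

    # Sketch of the DH-HMAC-CHAP key rotation exercised above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

    # Target: hand the subsystem a new key pair for this host
    # (key2 = host key, key3 = controller key).
    $RPC nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # Host: re-authenticate the attached nvme0 controller with the matching pair.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # Confirm the controller is still present after the rotation.
    $RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
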
00:22:06.080 05:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.081 05:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:06.081 05:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.081 05:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:06.081 05:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:06.339 request: 00:22:06.339 { 00:22:06.339 "name": "nvme0", 00:22:06.339 "dhchap_key": "key2", 00:22:06.339 "dhchap_ctrlr_key": "key0", 00:22:06.339 "method": "bdev_nvme_set_keys", 00:22:06.339 "req_id": 1 00:22:06.339 } 00:22:06.339 Got JSON-RPC error response 00:22:06.339 response: 00:22:06.339 { 00:22:06.339 "code": -13, 00:22:06.339 "message": "Permission denied" 00:22:06.339 } 00:22:06.339 05:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:06.339 05:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:06.339 05:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:06.339 05:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:06.339 05:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:06.339 05:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:06.339 05:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.597 05:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:06.597 05:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:07.532 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:07.532 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:07.532 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.790 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:07.790 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:07.790 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:07.790 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3358047 00:22:07.790 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3358047 ']' 00:22:07.790 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3358047 00:22:07.790 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:07.790 
05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:07.790 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3358047 00:22:07.790 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:07.790 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:07.790 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3358047' 00:22:07.790 killing process with pid 3358047 00:22:07.790 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3358047 00:22:07.790 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3358047 00:22:08.357 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:08.357 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:08.357 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:08.357 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:08.357 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:08.357 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:08.357 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:08.357 rmmod nvme_tcp 00:22:08.357 rmmod nvme_fabrics 00:22:08.357 rmmod nvme_keyring 00:22:08.357 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:08.357 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:08.357 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:08.357 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 3378757 ']' 00:22:08.357 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 3378757 00:22:08.357 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3378757 ']' 00:22:08.357 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3378757 00:22:08.357 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:08.357 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:08.357 05:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3378757 00:22:08.357 05:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:08.357 05:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:08.357 05:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3378757' 00:22:08.357 killing process with pid 3378757 00:22:08.358 05:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3378757 00:22:08.358 05:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3378757 00:22:08.616 05:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:08.616 05:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:08.616 05:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:08.616 05:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:08.616 05:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 00:22:08.616 05:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:08.616 05:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:22:08.616 05:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:08.616 05:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:08.616 05:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.617 05:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.617 05:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.522 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:10.522 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.A01 /tmp/spdk.key-sha256.vc9 /tmp/spdk.key-sha384.5NZ /tmp/spdk.key-sha512.uoQ /tmp/spdk.key-sha512.eIC /tmp/spdk.key-sha384.MFx /tmp/spdk.key-sha256.gcd '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:10.522 00:22:10.522 real 2m29.665s 00:22:10.522 user 5m44.934s 00:22:10.522 sys 0m23.590s 00:22:10.522 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:10.522 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.522 ************************************ 00:22:10.522 END TEST nvmf_auth_target 00:22:10.522 ************************************ 00:22:10.522 05:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:10.522 05:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:10.522 05:50:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:10.522 05:50:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:10.522 05:50:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:10.522 ************************************ 00:22:10.522 START TEST nvmf_bdevio_no_huge 00:22:10.522 ************************************ 00:22:10.523 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:10.781 * Looking for test storage... 
00:22:10.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:10.781 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:10.781 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:22:10.781 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:10.781 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:10.781 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:10.781 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:10.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.782 --rc genhtml_branch_coverage=1 00:22:10.782 --rc genhtml_function_coverage=1 00:22:10.782 --rc genhtml_legend=1 00:22:10.782 --rc geninfo_all_blocks=1 00:22:10.782 --rc geninfo_unexecuted_blocks=1 00:22:10.782 00:22:10.782 ' 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:10.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.782 --rc genhtml_branch_coverage=1 00:22:10.782 --rc genhtml_function_coverage=1 00:22:10.782 --rc genhtml_legend=1 00:22:10.782 --rc geninfo_all_blocks=1 00:22:10.782 --rc geninfo_unexecuted_blocks=1 00:22:10.782 00:22:10.782 ' 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:10.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.782 --rc genhtml_branch_coverage=1 00:22:10.782 --rc genhtml_function_coverage=1 00:22:10.782 --rc genhtml_legend=1 00:22:10.782 --rc geninfo_all_blocks=1 00:22:10.782 --rc geninfo_unexecuted_blocks=1 00:22:10.782 00:22:10.782 ' 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:10.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.782 --rc genhtml_branch_coverage=1 00:22:10.782 --rc genhtml_function_coverage=1 00:22:10.782 --rc genhtml_legend=1 00:22:10.782 --rc geninfo_all_blocks=1 00:22:10.782 --rc geninfo_unexecuted_blocks=1 00:22:10.782 00:22:10.782 ' 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:10.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:10.782 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:10.783 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:10.783 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.783 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.783 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.783 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:10.783 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:10.783 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:10.783 05:50:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:16.052 
05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:16.052 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 
00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:16.052 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:16.052 Found net devices under 0000:af:00.0: cvl_0_0 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:16.052 Found net devices under 0000:af:00.1: cvl_0_1 00:22:16.052 
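[editor's note] The device-discovery loop traced above maps each supported PCI NIC to its kernel net device via /sys. The following is a minimal standalone sketch of that lookup, not part of the test run; it mirrors the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion in nvmf/common.sh, and the resulting interface names (cvl_0_0 / cvl_0_1 here) will differ on other hosts.
#!/usr/bin/env bash
# Sketch: list the kernel net devices backing each Intel E810 (0x8086:0x159b) port,
# the same sysfs walk the test harness performs when gathering NVMe-oF-capable NICs.
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "${pci##*/} -> ${net##*/}"
    done
done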
05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # is_hw=yes 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:16.052 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:16.311 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:16.311 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:16.311 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:16.311 05:50:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:16.311 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:16.311 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:16.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:16.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:22:16.312 00:22:16.312 --- 10.0.0.2 ping statistics --- 00:22:16.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.312 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:16.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:16.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:22:16.312 00:22:16.312 --- 10.0.0.1 ping statistics --- 00:22:16.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.312 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # return 0 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=3385484 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 3385484 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 3385484 ']' 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:16.312 05:50:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:16.570 [2024-12-16 05:50:50.217534] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:16.570 [2024-12-16 05:50:50.217585] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:16.570 [2024-12-16 05:50:50.279953] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:16.570 [2024-12-16 05:50:50.343508] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.570 [2024-12-16 05:50:50.343546] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.570 [2024-12-16 05:50:50.343553] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.570 [2024-12-16 05:50:50.343559] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.570 [2024-12-16 05:50:50.343564] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:16.570 [2024-12-16 05:50:50.343634] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:22:16.570 [2024-12-16 05:50:50.343742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:22:16.570 [2024-12-16 05:50:50.343827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:16.570 [2024-12-16 05:50:50.343829] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:22:17.503 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:17.503 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:22:17.503 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:17.503 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.504 [2024-12-16 05:50:51.102830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.504 Malloc0 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.504 [2024-12-16 05:50:51.139091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:17.504 { 00:22:17.504 "params": { 00:22:17.504 "name": "Nvme$subsystem", 00:22:17.504 "trtype": "$TEST_TRANSPORT", 00:22:17.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.504 "adrfam": "ipv4", 00:22:17.504 "trsvcid": "$NVMF_PORT", 00:22:17.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.504 "hdgst": ${hdgst:-false}, 00:22:17.504 "ddgst": ${ddgst:-false} 00:22:17.504 }, 00:22:17.504 "method": "bdev_nvme_attach_controller" 00:22:17.504 } 00:22:17.504 EOF 00:22:17.504 )") 00:22:17.504 05:50:51 
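[editor's note] The rpc_cmd calls traced above build the bdevio target subsystem step by step. Below is a hedged, standalone replay of that sequence using scripts/rpc.py directly; the command names and flags are copied from the log, but the RPC socket, network namespace, and listener address depend on how nvmf_tgt was started, so this is an illustration rather than the harness's own code.
#!/usr/bin/env bash
# Sketch: recreate the target used by target/bdevio.sh (assumes nvmf_tgt is already
# running and reachable on the default RPC socket).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB in-capsule data
$RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420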
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:22:17.504 05:50:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:22:17.504 "params": { 00:22:17.504 "name": "Nvme1", 00:22:17.504 "trtype": "tcp", 00:22:17.504 "traddr": "10.0.0.2", 00:22:17.504 "adrfam": "ipv4", 00:22:17.504 "trsvcid": "4420", 00:22:17.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:17.504 "hdgst": false, 00:22:17.504 "ddgst": false 00:22:17.504 }, 00:22:17.504 "method": "bdev_nvme_attach_controller" 00:22:17.504 }' 00:22:17.504 [2024-12-16 05:50:51.188793] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:17.504 [2024-12-16 05:50:51.188839] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3385728 ] 00:22:17.504 [2024-12-16 05:50:51.246467] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:17.504 [2024-12-16 05:50:51.312678] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.504 [2024-12-16 05:50:51.312776] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.504 [2024-12-16 05:50:51.312778] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.762 I/O targets: 00:22:17.762 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:17.762 00:22:17.762 00:22:17.762 CUnit - A unit testing framework for C - Version 2.1-3 00:22:17.762 http://cunit.sourceforge.net/ 00:22:17.762 00:22:17.762 00:22:17.762 Suite: bdevio tests on: Nvme1n1 00:22:17.762 Test: blockdev write read block ...passed 00:22:17.762 Test: blockdev write zeroes read block ...passed 00:22:17.762 Test: blockdev write zeroes read no split ...passed 00:22:17.762 Test: blockdev write zeroes read split ...passed 00:22:18.019 Test: blockdev write zeroes read split partial ...passed 00:22:18.019 Test: blockdev reset ...[2024-12-16 05:50:51.635152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:18.019 [2024-12-16 05:50:51.635213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1805390 (9): Bad file descriptor 00:22:18.019 [2024-12-16 05:50:51.665031] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:18.019 passed 00:22:18.020 Test: blockdev write read 8 blocks ...passed 00:22:18.020 Test: blockdev write read size > 128k ...passed 00:22:18.020 Test: blockdev write read invalid size ...passed 00:22:18.020 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:18.020 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:18.020 Test: blockdev write read max offset ...passed 00:22:18.020 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:18.020 Test: blockdev writev readv 8 blocks ...passed 00:22:18.020 Test: blockdev writev readv 30 x 1block ...passed 00:22:18.278 Test: blockdev writev readv block ...passed 00:22:18.278 Test: blockdev writev readv size > 128k ...passed 00:22:18.278 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:18.278 Test: blockdev comparev and writev ...[2024-12-16 05:50:51.918636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.278 [2024-12-16 05:50:51.918669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:18.278 [2024-12-16 05:50:51.918684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.278 [2024-12-16 05:50:51.918691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.278 [2024-12-16 05:50:51.918945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.278 [2024-12-16 05:50:51.918955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:18.278 [2024-12-16 05:50:51.918967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.278 [2024-12-16 05:50:51.918973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:18.278 [2024-12-16 05:50:51.919204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.278 [2024-12-16 05:50:51.919213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:18.278 [2024-12-16 05:50:51.919225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.278 [2024-12-16 05:50:51.919233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:18.278 [2024-12-16 05:50:51.919478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.278 [2024-12-16 05:50:51.919492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:18.278 [2024-12-16 05:50:51.919503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.278 [2024-12-16 05:50:51.919509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:18.278 passed 00:22:18.278 Test: blockdev nvme passthru rw ...passed 00:22:18.278 Test: blockdev nvme passthru vendor specific ...[2024-12-16 05:50:52.001202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:18.278 [2024-12-16 05:50:52.001220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:18.278 [2024-12-16 05:50:52.001331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:18.278 [2024-12-16 05:50:52.001341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:18.278 [2024-12-16 05:50:52.001443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:18.278 [2024-12-16 05:50:52.001452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:18.278 [2024-12-16 05:50:52.001551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:18.278 [2024-12-16 05:50:52.001560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:18.278 passed 00:22:18.278 Test: blockdev nvme admin passthru ...passed 00:22:18.278 Test: blockdev copy ...passed 00:22:18.278 00:22:18.278 Run Summary: Type Total Ran Passed Failed Inactive 00:22:18.278 suites 1 1 n/a 0 0 00:22:18.278 tests 23 23 23 0 0 00:22:18.278 asserts 152 152 152 0 n/a 00:22:18.278 00:22:18.278 Elapsed time = 1.163 seconds 00:22:18.536 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:18.536 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.536 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:18.536 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.536 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:18.536 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:18.536 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:18.536 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:18.536 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:18.536 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:18.536 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:18.536 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:18.536 rmmod nvme_tcp 00:22:18.536 rmmod nvme_fabrics 00:22:18.536 rmmod nvme_keyring 00:22:18.536 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:18.794 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:18.794 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:18.794 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 3385484 ']' 00:22:18.794 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 3385484 00:22:18.794 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 3385484 ']' 00:22:18.794 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 3385484 00:22:18.794 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:22:18.794 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:18.794 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3385484 00:22:18.794 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:22:18.794 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:22:18.794 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3385484' 00:22:18.794 killing process with pid 3385484 00:22:18.794 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 3385484 00:22:18.794 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 3385484 00:22:19.053 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:19.053 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:19.053 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:19.053 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:19.053 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:19.053 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:22:19.053 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:22:19.053 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:19.053 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:19.053 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.053 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.053 05:50:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.586 05:50:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:21.586 00:22:21.586 real 0m10.461s 00:22:21.586 user 0m13.148s 00:22:21.586 sys 0m5.069s 00:22:21.586 05:50:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:21.586 05:50:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:21.586 ************************************ 00:22:21.586 END TEST nvmf_bdevio_no_huge 00:22:21.586 ************************************ 00:22:21.586 05:50:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:21.586 05:50:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:21.586 05:50:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:21.586 05:50:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:21.586 ************************************ 00:22:21.586 START TEST nvmf_tls 00:22:21.586 ************************************ 00:22:21.586 05:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:21.586 * Looking for test storage... 00:22:21.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:21.586 05:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:21.586 05:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:22:21.586 05:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:21.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.586 --rc genhtml_branch_coverage=1 00:22:21.586 --rc genhtml_function_coverage=1 00:22:21.586 --rc genhtml_legend=1 00:22:21.586 --rc geninfo_all_blocks=1 00:22:21.586 --rc geninfo_unexecuted_blocks=1 00:22:21.586 00:22:21.586 ' 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:21.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.586 --rc genhtml_branch_coverage=1 00:22:21.586 --rc genhtml_function_coverage=1 00:22:21.586 --rc genhtml_legend=1 00:22:21.586 --rc geninfo_all_blocks=1 00:22:21.586 --rc geninfo_unexecuted_blocks=1 00:22:21.586 00:22:21.586 ' 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:21.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.586 --rc genhtml_branch_coverage=1 00:22:21.586 --rc genhtml_function_coverage=1 00:22:21.586 --rc genhtml_legend=1 00:22:21.586 --rc geninfo_all_blocks=1 00:22:21.586 --rc geninfo_unexecuted_blocks=1 00:22:21.586 00:22:21.586 ' 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:21.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.586 --rc genhtml_branch_coverage=1 00:22:21.586 --rc genhtml_function_coverage=1 00:22:21.586 --rc genhtml_legend=1 00:22:21.586 --rc geninfo_all_blocks=1 00:22:21.586 --rc geninfo_unexecuted_blocks=1 00:22:21.586 00:22:21.586 ' 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
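The version probe traced above boils down to: take the last field of lcov --version, split it on '.', '-' or ':' and compare it field by field against 2 to decide whether the older lcov flag set is needed. A minimal standalone sketch of that comparison (an illustration of the idea, not the SPDK scripts/common.sh helper itself):

  version_lt() {                        # returns 0 when $1 is strictly older than $2
      local -a v1 v2; local i a b max
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < max; i++ )); do
          a=${v1[i]:-0}; b=${v2[i]:-0}
          (( a < b )) && return 0
          (( a > b )) && return 1
      done
      return 1                          # equal versions are not "less than"
  }

  ver=$(lcov --version 2>/dev/null | awk '{print $NF}')   # e.g. 1.15 on this host
  if [[ -n $ver ]] && version_lt "$ver" 2; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi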
00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:21.586 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:21.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:21.587 05:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
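The arrays built above map NIC families onto PCI vendor/device IDs: Intel E810 ports are vendor 0x8086 with device 0x1592 or 0x159b, X722 is 0x37d2, and the mlx list holds several Mellanox ConnectX IDs. The harness then walks the PCI bus for matches, which is roughly the following standalone sysfs scan (a sketch assuming the same E810 ID reported later in this run):

  intel=0x8086 e810_dev=0x159b
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor") device=$(<"$pci/device")
      if [[ $vendor == "$intel" && $device == "$e810_dev" ]]; then
          echo "Found ${pci##*/} ($vendor - $device)"
          ls "$pci/net" 2>/dev/null     # kernel net interfaces, e.g. cvl_0_0 / cvl_0_1 here
      fi
  done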
00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:26.857 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:26.857 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:26.857 Found net devices under 0000:af:00.0: cvl_0_0 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:26.857 Found net devices under 0000:af:00.1: cvl_0_1 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # is_hw=yes 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
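At this point the harness has paired cvl_0_0 (target side, to be moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2) with cvl_0_1 (initiator side, 10.0.0.1 in the root namespace); the trace lines that follow perform exactly that wiring. Condensed into one standalone sketch, assuming those interface names and addresses:

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP to the listener
  ping -c 1 10.0.0.2                                              # initiator -> target reachability check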
00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:26.857 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:26.858 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:26.858 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:26.858 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.858 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.858 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.858 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:26.858 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:26.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:22:26.858 00:22:26.858 --- 10.0.0.2 ping statistics --- 00:22:26.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.858 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:22:26.858 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:26.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:22:26.858 00:22:26.858 --- 10.0.0.1 ping statistics --- 00:22:26.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.858 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:22:26.858 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.858 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # return 0 00:22:26.858 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:26.858 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.858 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:26.858 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:26.858 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.858 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:26.858 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:27.116 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:27.116 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:27.116 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:27.116 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.116 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3389469 00:22:27.116 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3389469 00:22:27.116 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:27.116 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3389469 ']' 00:22:27.116 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.116 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:27.116 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.116 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:27.116 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.116 [2024-12-16 05:51:00.789293] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:22:27.116 [2024-12-16 05:51:00.789340] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.116 [2024-12-16 05:51:00.845188] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.116 [2024-12-16 05:51:00.883688] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.116 [2024-12-16 05:51:00.883728] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.116 [2024-12-16 05:51:00.883735] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.116 [2024-12-16 05:51:00.883741] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.116 [2024-12-16 05:51:00.883746] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:27.116 [2024-12-16 05:51:00.883768] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.116 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:27.116 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:27.116 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:27.116 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:27.116 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.373 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.373 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:27.373 05:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:27.373 true 00:22:27.373 05:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:27.373 05:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:27.630 05:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:27.630 05:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:27.630 05:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:27.888 05:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:27.888 05:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:27.888 05:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:27.888 05:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:27.888 05:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:28.146 05:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:28.146 05:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:28.404 05:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:28.404 05:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:28.404 05:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:28.404 05:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:28.662 05:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:28.662 05:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:28.662 05:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:28.662 05:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:28.662 05:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:28.920 05:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:28.920 05:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:28.920 05:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:29.178 05:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:29.178 05:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:29.178 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:29.178 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@726 -- # local prefix key digest 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.uejUo5GShp 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.nM9TdJ9sVJ 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.uejUo5GShp 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.nM9TdJ9sVJ 00:22:29.436 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:29.695 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:29.953 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.uejUo5GShp 00:22:29.953 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.uejUo5GShp 00:22:29.953 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:29.953 [2024-12-16 05:51:03.744355] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.953 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:30.211 05:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:30.469 [2024-12-16 05:51:04.113319] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:30.469 [2024-12-16 05:51:04.113542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.469 05:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:30.469 malloc0 00:22:30.469 05:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:30.727 05:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.uejUo5GShp 00:22:30.985 05:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:31.243 05:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.uejUo5GShp 00:22:41.208 Initializing NVMe Controllers 00:22:41.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:41.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:41.208 Initialization complete. Launching workers. 00:22:41.208 ======================================================== 00:22:41.208 Latency(us) 00:22:41.208 Device Information : IOPS MiB/s Average min max 00:22:41.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16827.37 65.73 3803.47 754.70 6284.37 00:22:41.208 ======================================================== 00:22:41.208 Total : 16827.37 65.73 3803.47 754.70 6284.37 00:22:41.208 00:22:41.208 05:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uejUo5GShp 00:22:41.208 05:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:41.208 05:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:41.208 05:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:41.208 05:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.uejUo5GShp 00:22:41.208 05:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.208 05:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3392215 00:22:41.208 05:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:41.208 05:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3392215 /var/tmp/bdevperf.sock 00:22:41.208 05:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3392215 ']' 00:22:41.208 05:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.208 05:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:41.208 05:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:41.208 05:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:41.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.208 05:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:41.208 05:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.208 [2024-12-16 05:51:15.023720] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:41.208 [2024-12-16 05:51:15.023766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3392215 ] 00:22:41.466 [2024-12-16 05:51:15.072761] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.466 [2024-12-16 05:51:15.111026] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:41.466 05:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:41.466 05:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:41.466 05:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uejUo5GShp 00:22:41.722 05:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:41.722 [2024-12-16 05:51:15.552250] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:41.980 TLSTESTn1 00:22:41.980 05:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:41.980 Running I/O for 10 seconds... 
00:22:44.289 5468.00 IOPS, 21.36 MiB/s [2024-12-16T04:51:18.860Z] 5505.00 IOPS, 21.50 MiB/s [2024-12-16T04:51:19.793Z] 5523.00 IOPS, 21.57 MiB/s [2024-12-16T04:51:21.167Z] 5538.25 IOPS, 21.63 MiB/s [2024-12-16T04:51:22.101Z] 5532.80 IOPS, 21.61 MiB/s [2024-12-16T04:51:23.034Z] 5534.67 IOPS, 21.62 MiB/s [2024-12-16T04:51:23.968Z] 5535.43 IOPS, 21.62 MiB/s [2024-12-16T04:51:24.901Z] 5549.88 IOPS, 21.68 MiB/s [2024-12-16T04:51:25.834Z] 5555.67 IOPS, 21.70 MiB/s [2024-12-16T04:51:25.834Z] 5548.80 IOPS, 21.68 MiB/s 00:22:51.978 Latency(us) 00:22:51.978 [2024-12-16T04:51:25.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.978 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:51.978 Verification LBA range: start 0x0 length 0x2000 00:22:51.978 TLSTESTn1 : 10.02 5553.10 21.69 0.00 0.00 23014.20 4743.56 21720.50 00:22:51.978 [2024-12-16T04:51:25.834Z] =================================================================================================================== 00:22:51.978 [2024-12-16T04:51:25.834Z] Total : 5553.10 21.69 0.00 0.00 23014.20 4743.56 21720.50 00:22:51.978 { 00:22:51.978 "results": [ 00:22:51.978 { 00:22:51.978 "job": "TLSTESTn1", 00:22:51.978 "core_mask": "0x4", 00:22:51.978 "workload": "verify", 00:22:51.978 "status": "finished", 00:22:51.978 "verify_range": { 00:22:51.978 "start": 0, 00:22:51.978 "length": 8192 00:22:51.978 }, 00:22:51.978 "queue_depth": 128, 00:22:51.978 "io_size": 4096, 00:22:51.978 "runtime": 10.015118, 00:22:51.978 "iops": 5553.104816138961, 00:22:51.978 "mibps": 21.691815688042816, 00:22:51.978 "io_failed": 0, 00:22:51.978 "io_timeout": 0, 00:22:51.978 "avg_latency_us": 23014.204141363025, 00:22:51.978 "min_latency_us": 4743.558095238095, 00:22:51.978 "max_latency_us": 21720.502857142856 00:22:51.978 } 00:22:51.978 ], 00:22:51.978 "core_count": 1 00:22:51.978 } 00:22:51.978 05:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:51.978 05:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3392215 00:22:51.978 05:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3392215 ']' 00:22:51.978 05:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3392215 00:22:51.978 05:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:51.978 05:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:51.978 05:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3392215 00:22:52.236 05:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:52.236 05:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:52.237 05:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3392215' 00:22:52.237 killing process with pid 3392215 00:22:52.237 05:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3392215 00:22:52.237 Received shutdown signal, test time was about 10.000000 seconds 00:22:52.237 00:22:52.237 Latency(us) 00:22:52.237 [2024-12-16T04:51:26.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.237 [2024-12-16T04:51:26.093Z] 
=================================================================================================================== 00:22:52.237 [2024-12-16T04:51:26.093Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:52.237 05:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3392215 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nM9TdJ9sVJ 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nM9TdJ9sVJ 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nM9TdJ9sVJ 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.nM9TdJ9sVJ 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3394009 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3394009 /var/tmp/bdevperf.sock 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3394009 ']' 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:52.237 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.237 [2024-12-16 05:51:26.066442] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:52.237 [2024-12-16 05:51:26.066486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3394009 ] 00:22:52.495 [2024-12-16 05:51:26.117135] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.495 [2024-12-16 05:51:26.157414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.495 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:52.495 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:52.495 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nM9TdJ9sVJ 00:22:52.753 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:52.753 [2024-12-16 05:51:26.594806] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:52.753 [2024-12-16 05:51:26.606041] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:52.753 [2024-12-16 05:51:26.606096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x828840 (107): Transport endpoint is not connected 00:22:52.753 [2024-12-16 05:51:26.607074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x828840 (9): Bad file descriptor 00:22:52.753 [2024-12-16 05:51:26.608075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:52.753 [2024-12-16 05:51:26.608084] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:52.753 [2024-12-16 05:51:26.608091] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:52.753 [2024-12-16 05:51:26.608101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:53.011 request: 00:22:53.011 { 00:22:53.011 "name": "TLSTEST", 00:22:53.011 "trtype": "tcp", 00:22:53.011 "traddr": "10.0.0.2", 00:22:53.011 "adrfam": "ipv4", 00:22:53.011 "trsvcid": "4420", 00:22:53.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.011 "prchk_reftag": false, 00:22:53.011 "prchk_guard": false, 00:22:53.011 "hdgst": false, 00:22:53.011 "ddgst": false, 00:22:53.011 "psk": "key0", 00:22:53.011 "allow_unrecognized_csi": false, 00:22:53.011 "method": "bdev_nvme_attach_controller", 00:22:53.011 "req_id": 1 00:22:53.011 } 00:22:53.011 Got JSON-RPC error response 00:22:53.011 response: 00:22:53.011 { 00:22:53.011 "code": -5, 00:22:53.011 "message": "Input/output error" 00:22:53.011 } 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3394009 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3394009 ']' 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3394009 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3394009 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3394009' 00:22:53.011 killing process with pid 3394009 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3394009 00:22:53.011 Received shutdown signal, test time was about 10.000000 seconds 00:22:53.011 00:22:53.011 Latency(us) 00:22:53.011 [2024-12-16T04:51:26.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.011 [2024-12-16T04:51:26.867Z] =================================================================================================================== 00:22:53.011 [2024-12-16T04:51:26.867Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3394009 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uejUo5GShp 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.uejUo5GShp 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uejUo5GShp 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.uejUo5GShp 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3394061 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3394061 /var/tmp/bdevperf.sock 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3394061 ']' 00:22:53.011 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.270 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:53.270 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.270 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:53.270 05:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.270 [2024-12-16 05:51:26.909546] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:22:53.270 [2024-12-16 05:51:26.909597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3394061 ] 00:22:53.270 [2024-12-16 05:51:26.961713] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.270 [2024-12-16 05:51:26.999541] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.270 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:53.270 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:53.270 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uejUo5GShp 00:22:53.527 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:53.786 [2024-12-16 05:51:27.437139] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.786 [2024-12-16 05:51:27.448080] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:53.786 [2024-12-16 05:51:27.448102] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:53.786 [2024-12-16 05:51:27.448126] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:53.786 [2024-12-16 05:51:27.448611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf86840 (107): Transport endpoint is not connected 00:22:53.786 [2024-12-16 05:51:27.449603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf86840 (9): Bad file descriptor 00:22:53.786 [2024-12-16 05:51:27.450604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:53.786 [2024-12-16 05:51:27.450614] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:53.786 [2024-12-16 05:51:27.450623] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:53.786 [2024-12-16 05:51:27.450633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:53.786 request: 00:22:53.786 { 00:22:53.786 "name": "TLSTEST", 00:22:53.786 "trtype": "tcp", 00:22:53.786 "traddr": "10.0.0.2", 00:22:53.786 "adrfam": "ipv4", 00:22:53.786 "trsvcid": "4420", 00:22:53.786 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.786 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:53.786 "prchk_reftag": false, 00:22:53.786 "prchk_guard": false, 00:22:53.786 "hdgst": false, 00:22:53.786 "ddgst": false, 00:22:53.786 "psk": "key0", 00:22:53.786 "allow_unrecognized_csi": false, 00:22:53.786 "method": "bdev_nvme_attach_controller", 00:22:53.786 "req_id": 1 00:22:53.786 } 00:22:53.786 Got JSON-RPC error response 00:22:53.786 response: 00:22:53.786 { 00:22:53.786 "code": -5, 00:22:53.786 "message": "Input/output error" 00:22:53.786 } 00:22:53.786 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3394061 00:22:53.786 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3394061 ']' 00:22:53.786 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3394061 00:22:53.786 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:53.786 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:53.786 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3394061 00:22:53.786 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:53.786 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:53.786 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3394061' 00:22:53.786 killing process with pid 3394061 00:22:53.786 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3394061 00:22:53.786 Received shutdown signal, test time was about 10.000000 seconds 00:22:53.786 00:22:53.786 Latency(us) 00:22:53.786 [2024-12-16T04:51:27.642Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.786 [2024-12-16T04:51:27.642Z] =================================================================================================================== 00:22:53.786 [2024-12-16T04:51:27.642Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:53.786 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3394061 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uejUo5GShp 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.uejUo5GShp 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uejUo5GShp 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.uejUo5GShp 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3394249 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3394249 /var/tmp/bdevperf.sock 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3394249 ']' 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:54.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:54.045 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.045 [2024-12-16 05:51:27.735770] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:22:54.045 [2024-12-16 05:51:27.735822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3394249 ] 00:22:54.045 [2024-12-16 05:51:27.786449] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.045 [2024-12-16 05:51:27.821743] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.303 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:54.303 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:54.303 05:51:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uejUo5GShp 00:22:54.303 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:54.561 [2024-12-16 05:51:28.258924] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:54.561 [2024-12-16 05:51:28.270150] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:54.561 [2024-12-16 05:51:28.270172] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:54.561 [2024-12-16 05:51:28.270210] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:54.561 [2024-12-16 05:51:28.271170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc73840 (107): Transport endpoint is not connected 00:22:54.561 [2024-12-16 05:51:28.272162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc73840 (9): Bad file descriptor 00:22:54.561 [2024-12-16 05:51:28.273164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:54.561 [2024-12-16 05:51:28.273174] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:54.561 [2024-12-16 05:51:28.273182] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:54.561 [2024-12-16 05:51:28.273193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:54.561 request: 00:22:54.561 { 00:22:54.561 "name": "TLSTEST", 00:22:54.561 "trtype": "tcp", 00:22:54.561 "traddr": "10.0.0.2", 00:22:54.561 "adrfam": "ipv4", 00:22:54.561 "trsvcid": "4420", 00:22:54.561 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:54.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.562 "prchk_reftag": false, 00:22:54.562 "prchk_guard": false, 00:22:54.562 "hdgst": false, 00:22:54.562 "ddgst": false, 00:22:54.562 "psk": "key0", 00:22:54.562 "allow_unrecognized_csi": false, 00:22:54.562 "method": "bdev_nvme_attach_controller", 00:22:54.562 "req_id": 1 00:22:54.562 } 00:22:54.562 Got JSON-RPC error response 00:22:54.562 response: 00:22:54.562 { 00:22:54.562 "code": -5, 00:22:54.562 "message": "Input/output error" 00:22:54.562 } 00:22:54.562 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3394249 00:22:54.562 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3394249 ']' 00:22:54.562 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3394249 00:22:54.562 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:54.562 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:54.562 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3394249 00:22:54.562 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:54.562 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:54.562 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3394249' 00:22:54.562 killing process with pid 3394249 00:22:54.562 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3394249 00:22:54.562 Received shutdown signal, test time was about 10.000000 seconds 00:22:54.562 00:22:54.562 Latency(us) 00:22:54.562 [2024-12-16T04:51:28.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.562 [2024-12-16T04:51:28.418Z] =================================================================================================================== 00:22:54.562 [2024-12-16T04:51:28.418Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:54.562 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3394249 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:54.820 
05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3394468 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3394468 /var/tmp/bdevperf.sock 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3394468 ']' 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:54.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:54.820 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.820 [2024-12-16 05:51:28.564336] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:22:54.820 [2024-12-16 05:51:28.564385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3394468 ] 00:22:54.820 [2024-12-16 05:51:28.613988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.820 [2024-12-16 05:51:28.649428] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.078 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:55.078 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:55.078 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:55.078 [2024-12-16 05:51:28.882058] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:55.078 [2024-12-16 05:51:28.882092] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:55.078 request: 00:22:55.078 { 00:22:55.078 "name": "key0", 00:22:55.078 "path": "", 00:22:55.078 "method": "keyring_file_add_key", 00:22:55.078 "req_id": 1 00:22:55.078 } 00:22:55.078 Got JSON-RPC error response 00:22:55.078 response: 00:22:55.078 { 00:22:55.078 "code": -1, 00:22:55.078 "message": "Operation not permitted" 00:22:55.078 } 00:22:55.078 05:51:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:55.337 [2024-12-16 05:51:29.058598] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:55.337 [2024-12-16 05:51:29.058622] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:55.337 request: 00:22:55.337 { 00:22:55.337 "name": "TLSTEST", 00:22:55.337 "trtype": "tcp", 00:22:55.337 "traddr": "10.0.0.2", 00:22:55.337 "adrfam": "ipv4", 00:22:55.337 "trsvcid": "4420", 00:22:55.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.337 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:55.337 "prchk_reftag": false, 00:22:55.337 "prchk_guard": false, 00:22:55.337 "hdgst": false, 00:22:55.337 "ddgst": false, 00:22:55.337 "psk": "key0", 00:22:55.337 "allow_unrecognized_csi": false, 00:22:55.337 "method": "bdev_nvme_attach_controller", 00:22:55.337 "req_id": 1 00:22:55.337 } 00:22:55.337 Got JSON-RPC error response 00:22:55.337 response: 00:22:55.337 { 00:22:55.337 "code": -126, 00:22:55.337 "message": "Required key not available" 00:22:55.337 } 00:22:55.337 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3394468 00:22:55.337 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3394468 ']' 00:22:55.337 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3394468 00:22:55.337 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:55.337 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:55.337 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
3394468 00:22:55.337 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:55.337 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:55.337 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3394468' 00:22:55.337 killing process with pid 3394468 00:22:55.337 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3394468 00:22:55.337 Received shutdown signal, test time was about 10.000000 seconds 00:22:55.337 00:22:55.337 Latency(us) 00:22:55.337 [2024-12-16T04:51:29.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.337 [2024-12-16T04:51:29.193Z] =================================================================================================================== 00:22:55.337 [2024-12-16T04:51:29.193Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:55.337 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3394468 00:22:55.595 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:55.595 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:55.595 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:55.595 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:55.595 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:55.595 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3389469 00:22:55.595 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3389469 ']' 00:22:55.595 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3389469 00:22:55.595 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:55.595 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:55.595 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3389469 00:22:55.595 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:55.595 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:55.595 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3389469' 00:22:55.595 killing process with pid 3389469 00:22:55.595 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3389469 00:22:55.595 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3389469 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:22:55.854 05:51:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.evnaAYHRVC 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.evnaAYHRVC 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3394525 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3394525 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3394525 ']' 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:55.854 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.854 [2024-12-16 05:51:29.626844] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:55.854 [2024-12-16 05:51:29.626905] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.854 [2024-12-16 05:51:29.687248] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.112 [2024-12-16 05:51:29.724175] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.112 [2024-12-16 05:51:29.724213] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:56.112 [2024-12-16 05:51:29.724220] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:56.112 [2024-12-16 05:51:29.724226] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:56.112 [2024-12-16 05:51:29.724231] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:56.112 [2024-12-16 05:51:29.724254] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.112 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:56.112 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:56.112 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:56.112 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:56.112 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.112 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.112 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.evnaAYHRVC 00:22:56.112 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.evnaAYHRVC 00:22:56.112 05:51:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:56.371 [2024-12-16 05:51:30.016871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.371 05:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:56.628 05:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:56.629 [2024-12-16 05:51:30.413871] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:56.629 [2024-12-16 05:51:30.414086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.629 05:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:56.887 malloc0 00:22:56.887 05:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:57.145 05:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.evnaAYHRVC 00:22:57.145 05:51:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:57.403 05:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.evnaAYHRVC 00:22:57.403 05:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:22:57.403 05:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:57.403 05:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:57.403 05:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.evnaAYHRVC 00:22:57.403 05:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:57.403 05:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3394880 00:22:57.403 05:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:57.403 05:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3394880 /var/tmp/bdevperf.sock 00:22:57.403 05:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3394880 ']' 00:22:57.403 05:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.403 05:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:57.403 05:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:57.403 05:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:57.403 05:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:57.403 05:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.403 [2024-12-16 05:51:31.197929] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:22:57.403 [2024-12-16 05:51:31.197977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3394880 ] 00:22:57.403 [2024-12-16 05:51:31.247373] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.661 [2024-12-16 05:51:31.287533] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.661 05:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:57.661 05:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:57.661 05:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.evnaAYHRVC 00:22:57.919 05:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:57.919 [2024-12-16 05:51:31.720957] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:58.177 TLSTESTn1 00:22:58.177 05:51:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:58.177 Running I/O for 10 seconds... 00:23:00.484 5315.00 IOPS, 20.76 MiB/s [2024-12-16T04:51:34.905Z] 5497.50 IOPS, 21.47 MiB/s [2024-12-16T04:51:36.278Z] 5557.33 IOPS, 21.71 MiB/s [2024-12-16T04:51:37.212Z] 5450.75 IOPS, 21.29 MiB/s [2024-12-16T04:51:38.145Z] 5258.40 IOPS, 20.54 MiB/s [2024-12-16T04:51:39.080Z] 5128.67 IOPS, 20.03 MiB/s [2024-12-16T04:51:40.013Z] 5052.00 IOPS, 19.73 MiB/s [2024-12-16T04:51:40.945Z] 4978.50 IOPS, 19.45 MiB/s [2024-12-16T04:51:42.319Z] 4893.89 IOPS, 19.12 MiB/s [2024-12-16T04:51:42.319Z] 4847.20 IOPS, 18.93 MiB/s 00:23:08.463 Latency(us) 00:23:08.463 [2024-12-16T04:51:42.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.463 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:08.463 Verification LBA range: start 0x0 length 0x2000 00:23:08.463 TLSTESTn1 : 10.02 4851.32 18.95 0.00 0.00 26345.65 5898.24 32455.92 00:23:08.463 [2024-12-16T04:51:42.319Z] =================================================================================================================== 00:23:08.463 [2024-12-16T04:51:42.319Z] Total : 4851.32 18.95 0.00 0.00 26345.65 5898.24 32455.92 00:23:08.463 { 00:23:08.463 "results": [ 00:23:08.463 { 00:23:08.463 "job": "TLSTESTn1", 00:23:08.463 "core_mask": "0x4", 00:23:08.463 "workload": "verify", 00:23:08.463 "status": "finished", 00:23:08.463 "verify_range": { 00:23:08.463 "start": 0, 00:23:08.463 "length": 8192 00:23:08.463 }, 00:23:08.463 "queue_depth": 128, 00:23:08.463 "io_size": 4096, 00:23:08.463 "runtime": 10.017279, 00:23:08.463 "iops": 4851.317408649595, 00:23:08.463 "mibps": 18.95045862753748, 00:23:08.463 "io_failed": 0, 00:23:08.463 "io_timeout": 0, 00:23:08.463 "avg_latency_us": 26345.64866144001, 00:23:08.463 "min_latency_us": 5898.24, 00:23:08.463 "max_latency_us": 32455.92380952381 00:23:08.463 } 00:23:08.463 ], 00:23:08.463 "core_count": 1 
00:23:08.463 } 00:23:08.463 05:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:08.463 05:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3394880 00:23:08.463 05:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3394880 ']' 00:23:08.463 05:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3394880 00:23:08.463 05:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:08.463 05:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:08.463 05:51:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3394880 00:23:08.463 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:08.463 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:08.463 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3394880' 00:23:08.463 killing process with pid 3394880 00:23:08.463 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3394880 00:23:08.463 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.463 00:23:08.463 Latency(us) 00:23:08.463 [2024-12-16T04:51:42.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.463 [2024-12-16T04:51:42.319Z] =================================================================================================================== 00:23:08.463 [2024-12-16T04:51:42.319Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:08.463 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3394880 00:23:08.463 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.evnaAYHRVC 00:23:08.463 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.evnaAYHRVC 00:23:08.463 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:08.463 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.evnaAYHRVC 00:23:08.463 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:08.463 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:08.463 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:08.463 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:08.464 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.evnaAYHRVC 00:23:08.464 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:08.464 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:08.464 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:08.464 05:51:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.evnaAYHRVC 00:23:08.464 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.464 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3396544 00:23:08.464 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.464 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:08.464 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3396544 /var/tmp/bdevperf.sock 00:23:08.464 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3396544 ']' 00:23:08.464 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.464 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:08.464 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.464 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:08.464 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.464 [2024-12-16 05:51:42.244965] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:08.464 [2024-12-16 05:51:42.245017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3396544 ] 00:23:08.464 [2024-12-16 05:51:42.299606] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.722 [2024-12-16 05:51:42.337073] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.722 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:08.722 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:08.722 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.evnaAYHRVC 00:23:08.979 [2024-12-16 05:51:42.597736] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.evnaAYHRVC': 0100666 00:23:08.979 [2024-12-16 05:51:42.597766] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:08.979 request: 00:23:08.979 { 00:23:08.979 "name": "key0", 00:23:08.979 "path": "/tmp/tmp.evnaAYHRVC", 00:23:08.979 "method": "keyring_file_add_key", 00:23:08.979 "req_id": 1 00:23:08.979 } 00:23:08.979 Got JSON-RPC error response 00:23:08.979 response: 00:23:08.979 { 00:23:08.979 "code": -1, 00:23:08.979 "message": "Operation not permitted" 00:23:08.979 } 00:23:08.980 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:08.980 [2024-12-16 05:51:42.782292] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.980 [2024-12-16 05:51:42.782320] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:08.980 request: 00:23:08.980 { 00:23:08.980 "name": "TLSTEST", 00:23:08.980 "trtype": "tcp", 00:23:08.980 "traddr": "10.0.0.2", 00:23:08.980 "adrfam": "ipv4", 00:23:08.980 "trsvcid": "4420", 00:23:08.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.980 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:08.980 "prchk_reftag": false, 00:23:08.980 "prchk_guard": false, 00:23:08.980 "hdgst": false, 00:23:08.980 "ddgst": false, 00:23:08.980 "psk": "key0", 00:23:08.980 "allow_unrecognized_csi": false, 00:23:08.980 "method": "bdev_nvme_attach_controller", 00:23:08.980 "req_id": 1 00:23:08.980 } 00:23:08.980 Got JSON-RPC error response 00:23:08.980 response: 00:23:08.980 { 00:23:08.980 "code": -126, 00:23:08.980 "message": "Required key not available" 00:23:08.980 } 00:23:08.980 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3396544 00:23:08.980 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3396544 ']' 00:23:08.980 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3396544 00:23:08.980 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:08.980 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:08.980 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3396544 00:23:09.238 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:09.238 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:09.238 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3396544' 00:23:09.238 killing process with pid 3396544 00:23:09.238 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3396544 00:23:09.238 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.238 00:23:09.238 Latency(us) 00:23:09.238 [2024-12-16T04:51:43.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.238 [2024-12-16T04:51:43.094Z] =================================================================================================================== 00:23:09.238 [2024-12-16T04:51:43.094Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:09.238 05:51:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3396544 00:23:09.238 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:09.238 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:09.238 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:09.238 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:09.238 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:09.238 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3394525 00:23:09.238 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3394525 ']' 00:23:09.238 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3394525 00:23:09.238 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:09.238 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.238 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3394525 00:23:09.238 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:09.238 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:09.238 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3394525' 00:23:09.238 killing process with pid 3394525 00:23:09.238 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3394525 00:23:09.238 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3394525 00:23:09.496 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:09.496 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:09.496 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:09.496 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.496 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # 
nvmfpid=3396775 00:23:09.496 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3396775 00:23:09.496 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:09.496 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3396775 ']' 00:23:09.496 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.496 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:09.496 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.496 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:09.496 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.496 [2024-12-16 05:51:43.303557] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:09.496 [2024-12-16 05:51:43.303604] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.755 [2024-12-16 05:51:43.361727] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.755 [2024-12-16 05:51:43.399277] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.755 [2024-12-16 05:51:43.399314] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.755 [2024-12-16 05:51:43.399321] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.755 [2024-12-16 05:51:43.399327] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.755 [2024-12-16 05:51:43.399332] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:09.755 [2024-12-16 05:51:43.399367] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.755 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:09.755 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:09.755 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:09.755 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:09.755 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.755 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.755 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.evnaAYHRVC 00:23:09.755 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:09.755 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.evnaAYHRVC 00:23:09.755 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:23:09.755 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.755 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:23:09.755 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.755 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.evnaAYHRVC 00:23:09.755 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.evnaAYHRVC 00:23:09.755 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:10.013 [2024-12-16 05:51:43.695561] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.013 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:10.271 05:51:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:10.271 [2024-12-16 05:51:44.064516] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:10.271 [2024-12-16 05:51:44.064718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.271 05:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:10.529 malloc0 00:23:10.529 05:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:10.787 05:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.evnaAYHRVC 00:23:10.787 [2024-12-16 
05:51:44.630066] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.evnaAYHRVC': 0100666 00:23:10.787 [2024-12-16 05:51:44.630094] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:10.787 request: 00:23:10.787 { 00:23:10.787 "name": "key0", 00:23:10.787 "path": "/tmp/tmp.evnaAYHRVC", 00:23:10.787 "method": "keyring_file_add_key", 00:23:10.787 "req_id": 1 00:23:10.787 } 00:23:10.787 Got JSON-RPC error response 00:23:10.787 response: 00:23:10.787 { 00:23:10.787 "code": -1, 00:23:10.787 "message": "Operation not permitted" 00:23:10.787 } 00:23:11.045 05:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:11.045 [2024-12-16 05:51:44.806548] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:11.045 [2024-12-16 05:51:44.806584] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:11.045 request: 00:23:11.045 { 00:23:11.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.045 "host": "nqn.2016-06.io.spdk:host1", 00:23:11.045 "psk": "key0", 00:23:11.045 "method": "nvmf_subsystem_add_host", 00:23:11.045 "req_id": 1 00:23:11.045 } 00:23:11.045 Got JSON-RPC error response 00:23:11.045 response: 00:23:11.045 { 00:23:11.045 "code": -32603, 00:23:11.045 "message": "Internal error" 00:23:11.045 } 00:23:11.045 05:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:11.045 05:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:11.045 05:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:11.045 05:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:11.045 05:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3396775 00:23:11.045 05:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3396775 ']' 00:23:11.045 05:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3396775 00:23:11.045 05:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:11.045 05:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:11.045 05:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3396775 00:23:11.045 05:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:11.045 05:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:11.045 05:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3396775' 00:23:11.045 killing process with pid 3396775 00:23:11.045 05:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3396775 00:23:11.045 05:51:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3396775 00:23:11.304 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.evnaAYHRVC 00:23:11.304 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:11.304 05:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:11.304 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:11.304 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.304 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3397079 00:23:11.304 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3397079 00:23:11.304 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:11.304 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3397079 ']' 00:23:11.304 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.304 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:11.304 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.304 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:11.304 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.304 [2024-12-16 05:51:45.098346] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:11.304 [2024-12-16 05:51:45.098390] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.304 [2024-12-16 05:51:45.156893] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.561 [2024-12-16 05:51:45.195335] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.561 [2024-12-16 05:51:45.195374] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.561 [2024-12-16 05:51:45.195380] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.561 [2024-12-16 05:51:45.195386] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.561 [2024-12-16 05:51:45.195391] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
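The failure above is deliberate: keyring_file_add_key rejects the PSK file while it is still mode 0666 ("Invalid permissions for key file"), and the following nvmf_subsystem_add_host fails because key0 was never registered. After chmod 0600 the same target-side sequence is repeated below and succeeds. A condensed sketch of that sequence, distilled from the rpc.py calls in this trace (key path and NQNs are simply the ones this test uses):

  # keyring_file_add_key refuses group/world-accessible key files, so restrict the PSK first.
  chmod 0600 /tmp/tmp.evnaAYHRVC
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k marks the listener as TLS-capable ("secure_channel": true in the saved config below).
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.evnaAYHRVC
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0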
00:23:11.561 [2024-12-16 05:51:45.195412] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.561 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:11.561 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:11.561 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:11.561 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:11.561 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.561 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.561 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.evnaAYHRVC 00:23:11.561 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.evnaAYHRVC 00:23:11.561 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:11.818 [2024-12-16 05:51:45.483965] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.818 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:12.076 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:12.076 [2024-12-16 05:51:45.852915] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:12.076 [2024-12-16 05:51:45.853106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.076 05:51:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:12.334 malloc0 00:23:12.334 05:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:12.591 05:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.evnaAYHRVC 00:23:12.591 05:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:12.849 05:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3397420 00:23:12.849 05:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:12.849 05:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3397420 /var/tmp/bdevperf.sock 00:23:12.849 05:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:12.849 05:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 3397420 ']' 00:23:12.849 05:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:12.849 05:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:12.849 05:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:12.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:12.849 05:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:12.849 05:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.849 [2024-12-16 05:51:46.653489] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:12.849 [2024-12-16 05:51:46.653542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3397420 ] 00:23:12.849 [2024-12-16 05:51:46.703361] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.107 [2024-12-16 05:51:46.743126] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.107 05:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:13.107 05:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:13.107 05:51:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.evnaAYHRVC 00:23:13.365 05:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:13.365 [2024-12-16 05:51:47.176206] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.623 TLSTESTn1 00:23:13.623 05:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:13.881 05:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:13.881 "subsystems": [ 00:23:13.881 { 00:23:13.881 "subsystem": "keyring", 00:23:13.881 "config": [ 00:23:13.881 { 00:23:13.881 "method": "keyring_file_add_key", 00:23:13.881 "params": { 00:23:13.881 "name": "key0", 00:23:13.881 "path": "/tmp/tmp.evnaAYHRVC" 00:23:13.881 } 00:23:13.881 } 00:23:13.881 ] 00:23:13.881 }, 00:23:13.881 { 00:23:13.881 "subsystem": "iobuf", 00:23:13.881 "config": [ 00:23:13.881 { 00:23:13.881 "method": "iobuf_set_options", 00:23:13.881 "params": { 00:23:13.881 "small_pool_count": 8192, 00:23:13.881 "large_pool_count": 1024, 00:23:13.881 "small_bufsize": 8192, 00:23:13.881 "large_bufsize": 135168 00:23:13.881 } 00:23:13.881 } 00:23:13.881 ] 00:23:13.881 }, 00:23:13.881 { 00:23:13.881 "subsystem": "sock", 00:23:13.881 "config": [ 00:23:13.881 { 00:23:13.881 "method": "sock_set_default_impl", 00:23:13.881 "params": { 00:23:13.881 "impl_name": "posix" 00:23:13.881 } 00:23:13.881 }, 
00:23:13.881 { 00:23:13.881 "method": "sock_impl_set_options", 00:23:13.881 "params": { 00:23:13.881 "impl_name": "ssl", 00:23:13.881 "recv_buf_size": 4096, 00:23:13.881 "send_buf_size": 4096, 00:23:13.881 "enable_recv_pipe": true, 00:23:13.881 "enable_quickack": false, 00:23:13.881 "enable_placement_id": 0, 00:23:13.881 "enable_zerocopy_send_server": true, 00:23:13.881 "enable_zerocopy_send_client": false, 00:23:13.881 "zerocopy_threshold": 0, 00:23:13.881 "tls_version": 0, 00:23:13.881 "enable_ktls": false 00:23:13.881 } 00:23:13.881 }, 00:23:13.881 { 00:23:13.881 "method": "sock_impl_set_options", 00:23:13.881 "params": { 00:23:13.881 "impl_name": "posix", 00:23:13.881 "recv_buf_size": 2097152, 00:23:13.881 "send_buf_size": 2097152, 00:23:13.881 "enable_recv_pipe": true, 00:23:13.881 "enable_quickack": false, 00:23:13.881 "enable_placement_id": 0, 00:23:13.881 "enable_zerocopy_send_server": true, 00:23:13.881 "enable_zerocopy_send_client": false, 00:23:13.881 "zerocopy_threshold": 0, 00:23:13.881 "tls_version": 0, 00:23:13.881 "enable_ktls": false 00:23:13.881 } 00:23:13.881 } 00:23:13.881 ] 00:23:13.881 }, 00:23:13.881 { 00:23:13.881 "subsystem": "vmd", 00:23:13.881 "config": [] 00:23:13.881 }, 00:23:13.881 { 00:23:13.881 "subsystem": "accel", 00:23:13.881 "config": [ 00:23:13.881 { 00:23:13.881 "method": "accel_set_options", 00:23:13.881 "params": { 00:23:13.881 "small_cache_size": 128, 00:23:13.881 "large_cache_size": 16, 00:23:13.881 "task_count": 2048, 00:23:13.881 "sequence_count": 2048, 00:23:13.881 "buf_count": 2048 00:23:13.881 } 00:23:13.881 } 00:23:13.881 ] 00:23:13.881 }, 00:23:13.881 { 00:23:13.881 "subsystem": "bdev", 00:23:13.881 "config": [ 00:23:13.881 { 00:23:13.881 "method": "bdev_set_options", 00:23:13.881 "params": { 00:23:13.881 "bdev_io_pool_size": 65535, 00:23:13.881 "bdev_io_cache_size": 256, 00:23:13.881 "bdev_auto_examine": true, 00:23:13.881 "iobuf_small_cache_size": 128, 00:23:13.881 "iobuf_large_cache_size": 16 00:23:13.881 } 00:23:13.881 }, 00:23:13.881 { 00:23:13.881 "method": "bdev_raid_set_options", 00:23:13.881 "params": { 00:23:13.881 "process_window_size_kb": 1024, 00:23:13.881 "process_max_bandwidth_mb_sec": 0 00:23:13.881 } 00:23:13.881 }, 00:23:13.881 { 00:23:13.881 "method": "bdev_iscsi_set_options", 00:23:13.881 "params": { 00:23:13.881 "timeout_sec": 30 00:23:13.881 } 00:23:13.881 }, 00:23:13.881 { 00:23:13.881 "method": "bdev_nvme_set_options", 00:23:13.881 "params": { 00:23:13.881 "action_on_timeout": "none", 00:23:13.881 "timeout_us": 0, 00:23:13.881 "timeout_admin_us": 0, 00:23:13.881 "keep_alive_timeout_ms": 10000, 00:23:13.881 "arbitration_burst": 0, 00:23:13.881 "low_priority_weight": 0, 00:23:13.882 "medium_priority_weight": 0, 00:23:13.882 "high_priority_weight": 0, 00:23:13.882 "nvme_adminq_poll_period_us": 10000, 00:23:13.882 "nvme_ioq_poll_period_us": 0, 00:23:13.882 "io_queue_requests": 0, 00:23:13.882 "delay_cmd_submit": true, 00:23:13.882 "transport_retry_count": 4, 00:23:13.882 "bdev_retry_count": 3, 00:23:13.882 "transport_ack_timeout": 0, 00:23:13.882 "ctrlr_loss_timeout_sec": 0, 00:23:13.882 "reconnect_delay_sec": 0, 00:23:13.882 "fast_io_fail_timeout_sec": 0, 00:23:13.882 "disable_auto_failback": false, 00:23:13.882 "generate_uuids": false, 00:23:13.882 "transport_tos": 0, 00:23:13.882 "nvme_error_stat": false, 00:23:13.882 "rdma_srq_size": 0, 00:23:13.882 "io_path_stat": false, 00:23:13.882 "allow_accel_sequence": false, 00:23:13.882 "rdma_max_cq_size": 0, 00:23:13.882 "rdma_cm_event_timeout_ms": 0, 00:23:13.882 
"dhchap_digests": [ 00:23:13.882 "sha256", 00:23:13.882 "sha384", 00:23:13.882 "sha512" 00:23:13.882 ], 00:23:13.882 "dhchap_dhgroups": [ 00:23:13.882 "null", 00:23:13.882 "ffdhe2048", 00:23:13.882 "ffdhe3072", 00:23:13.882 "ffdhe4096", 00:23:13.882 "ffdhe6144", 00:23:13.882 "ffdhe8192" 00:23:13.882 ] 00:23:13.882 } 00:23:13.882 }, 00:23:13.882 { 00:23:13.882 "method": "bdev_nvme_set_hotplug", 00:23:13.882 "params": { 00:23:13.882 "period_us": 100000, 00:23:13.882 "enable": false 00:23:13.882 } 00:23:13.882 }, 00:23:13.882 { 00:23:13.882 "method": "bdev_malloc_create", 00:23:13.882 "params": { 00:23:13.882 "name": "malloc0", 00:23:13.882 "num_blocks": 8192, 00:23:13.882 "block_size": 4096, 00:23:13.882 "physical_block_size": 4096, 00:23:13.882 "uuid": "21e9cd8d-c6dd-4b96-beb9-764fbbf12852", 00:23:13.882 "optimal_io_boundary": 0, 00:23:13.882 "md_size": 0, 00:23:13.882 "dif_type": 0, 00:23:13.882 "dif_is_head_of_md": false, 00:23:13.882 "dif_pi_format": 0 00:23:13.882 } 00:23:13.882 }, 00:23:13.882 { 00:23:13.882 "method": "bdev_wait_for_examine" 00:23:13.882 } 00:23:13.882 ] 00:23:13.882 }, 00:23:13.882 { 00:23:13.882 "subsystem": "nbd", 00:23:13.882 "config": [] 00:23:13.882 }, 00:23:13.882 { 00:23:13.882 "subsystem": "scheduler", 00:23:13.882 "config": [ 00:23:13.882 { 00:23:13.882 "method": "framework_set_scheduler", 00:23:13.882 "params": { 00:23:13.882 "name": "static" 00:23:13.882 } 00:23:13.882 } 00:23:13.882 ] 00:23:13.882 }, 00:23:13.882 { 00:23:13.882 "subsystem": "nvmf", 00:23:13.882 "config": [ 00:23:13.882 { 00:23:13.882 "method": "nvmf_set_config", 00:23:13.882 "params": { 00:23:13.882 "discovery_filter": "match_any", 00:23:13.882 "admin_cmd_passthru": { 00:23:13.882 "identify_ctrlr": false 00:23:13.882 }, 00:23:13.882 "dhchap_digests": [ 00:23:13.882 "sha256", 00:23:13.882 "sha384", 00:23:13.882 "sha512" 00:23:13.882 ], 00:23:13.882 "dhchap_dhgroups": [ 00:23:13.882 "null", 00:23:13.882 "ffdhe2048", 00:23:13.882 "ffdhe3072", 00:23:13.882 "ffdhe4096", 00:23:13.882 "ffdhe6144", 00:23:13.882 "ffdhe8192" 00:23:13.882 ] 00:23:13.882 } 00:23:13.882 }, 00:23:13.882 { 00:23:13.882 "method": "nvmf_set_max_subsystems", 00:23:13.882 "params": { 00:23:13.882 "max_subsystems": 1024 00:23:13.882 } 00:23:13.882 }, 00:23:13.882 { 00:23:13.882 "method": "nvmf_set_crdt", 00:23:13.882 "params": { 00:23:13.882 "crdt1": 0, 00:23:13.882 "crdt2": 0, 00:23:13.882 "crdt3": 0 00:23:13.882 } 00:23:13.882 }, 00:23:13.882 { 00:23:13.882 "method": "nvmf_create_transport", 00:23:13.882 "params": { 00:23:13.882 "trtype": "TCP", 00:23:13.882 "max_queue_depth": 128, 00:23:13.882 "max_io_qpairs_per_ctrlr": 127, 00:23:13.882 "in_capsule_data_size": 4096, 00:23:13.882 "max_io_size": 131072, 00:23:13.882 "io_unit_size": 131072, 00:23:13.882 "max_aq_depth": 128, 00:23:13.882 "num_shared_buffers": 511, 00:23:13.882 "buf_cache_size": 4294967295, 00:23:13.882 "dif_insert_or_strip": false, 00:23:13.882 "zcopy": false, 00:23:13.882 "c2h_success": false, 00:23:13.882 "sock_priority": 0, 00:23:13.882 "abort_timeout_sec": 1, 00:23:13.882 "ack_timeout": 0, 00:23:13.882 "data_wr_pool_size": 0 00:23:13.882 } 00:23:13.882 }, 00:23:13.882 { 00:23:13.882 "method": "nvmf_create_subsystem", 00:23:13.882 "params": { 00:23:13.882 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.882 "allow_any_host": false, 00:23:13.882 "serial_number": "SPDK00000000000001", 00:23:13.882 "model_number": "SPDK bdev Controller", 00:23:13.882 "max_namespaces": 10, 00:23:13.882 "min_cntlid": 1, 00:23:13.882 "max_cntlid": 65519, 00:23:13.882 
"ana_reporting": false 00:23:13.882 } 00:23:13.882 }, 00:23:13.882 { 00:23:13.882 "method": "nvmf_subsystem_add_host", 00:23:13.882 "params": { 00:23:13.882 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.882 "host": "nqn.2016-06.io.spdk:host1", 00:23:13.882 "psk": "key0" 00:23:13.882 } 00:23:13.882 }, 00:23:13.882 { 00:23:13.882 "method": "nvmf_subsystem_add_ns", 00:23:13.882 "params": { 00:23:13.882 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.882 "namespace": { 00:23:13.882 "nsid": 1, 00:23:13.882 "bdev_name": "malloc0", 00:23:13.882 "nguid": "21E9CD8DC6DD4B96BEB9764FBBF12852", 00:23:13.882 "uuid": "21e9cd8d-c6dd-4b96-beb9-764fbbf12852", 00:23:13.882 "no_auto_visible": false 00:23:13.882 } 00:23:13.882 } 00:23:13.882 }, 00:23:13.882 { 00:23:13.882 "method": "nvmf_subsystem_add_listener", 00:23:13.882 "params": { 00:23:13.882 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.882 "listen_address": { 00:23:13.882 "trtype": "TCP", 00:23:13.882 "adrfam": "IPv4", 00:23:13.882 "traddr": "10.0.0.2", 00:23:13.882 "trsvcid": "4420" 00:23:13.882 }, 00:23:13.882 "secure_channel": true 00:23:13.882 } 00:23:13.882 } 00:23:13.882 ] 00:23:13.882 } 00:23:13.882 ] 00:23:13.882 }' 00:23:13.882 05:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:14.141 05:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:14.141 "subsystems": [ 00:23:14.141 { 00:23:14.141 "subsystem": "keyring", 00:23:14.141 "config": [ 00:23:14.141 { 00:23:14.141 "method": "keyring_file_add_key", 00:23:14.141 "params": { 00:23:14.141 "name": "key0", 00:23:14.141 "path": "/tmp/tmp.evnaAYHRVC" 00:23:14.141 } 00:23:14.141 } 00:23:14.141 ] 00:23:14.141 }, 00:23:14.141 { 00:23:14.141 "subsystem": "iobuf", 00:23:14.141 "config": [ 00:23:14.141 { 00:23:14.141 "method": "iobuf_set_options", 00:23:14.141 "params": { 00:23:14.141 "small_pool_count": 8192, 00:23:14.141 "large_pool_count": 1024, 00:23:14.141 "small_bufsize": 8192, 00:23:14.141 "large_bufsize": 135168 00:23:14.141 } 00:23:14.141 } 00:23:14.141 ] 00:23:14.141 }, 00:23:14.141 { 00:23:14.141 "subsystem": "sock", 00:23:14.141 "config": [ 00:23:14.141 { 00:23:14.141 "method": "sock_set_default_impl", 00:23:14.141 "params": { 00:23:14.141 "impl_name": "posix" 00:23:14.141 } 00:23:14.141 }, 00:23:14.141 { 00:23:14.141 "method": "sock_impl_set_options", 00:23:14.141 "params": { 00:23:14.141 "impl_name": "ssl", 00:23:14.141 "recv_buf_size": 4096, 00:23:14.141 "send_buf_size": 4096, 00:23:14.141 "enable_recv_pipe": true, 00:23:14.141 "enable_quickack": false, 00:23:14.141 "enable_placement_id": 0, 00:23:14.141 "enable_zerocopy_send_server": true, 00:23:14.141 "enable_zerocopy_send_client": false, 00:23:14.141 "zerocopy_threshold": 0, 00:23:14.141 "tls_version": 0, 00:23:14.141 "enable_ktls": false 00:23:14.141 } 00:23:14.141 }, 00:23:14.141 { 00:23:14.141 "method": "sock_impl_set_options", 00:23:14.141 "params": { 00:23:14.141 "impl_name": "posix", 00:23:14.141 "recv_buf_size": 2097152, 00:23:14.141 "send_buf_size": 2097152, 00:23:14.141 "enable_recv_pipe": true, 00:23:14.141 "enable_quickack": false, 00:23:14.141 "enable_placement_id": 0, 00:23:14.141 "enable_zerocopy_send_server": true, 00:23:14.141 "enable_zerocopy_send_client": false, 00:23:14.141 "zerocopy_threshold": 0, 00:23:14.141 "tls_version": 0, 00:23:14.141 "enable_ktls": false 00:23:14.141 } 00:23:14.141 } 00:23:14.141 ] 00:23:14.141 }, 00:23:14.141 { 00:23:14.141 
"subsystem": "vmd", 00:23:14.141 "config": [] 00:23:14.141 }, 00:23:14.141 { 00:23:14.141 "subsystem": "accel", 00:23:14.141 "config": [ 00:23:14.141 { 00:23:14.141 "method": "accel_set_options", 00:23:14.141 "params": { 00:23:14.141 "small_cache_size": 128, 00:23:14.141 "large_cache_size": 16, 00:23:14.141 "task_count": 2048, 00:23:14.141 "sequence_count": 2048, 00:23:14.141 "buf_count": 2048 00:23:14.141 } 00:23:14.141 } 00:23:14.141 ] 00:23:14.141 }, 00:23:14.141 { 00:23:14.141 "subsystem": "bdev", 00:23:14.141 "config": [ 00:23:14.141 { 00:23:14.141 "method": "bdev_set_options", 00:23:14.141 "params": { 00:23:14.141 "bdev_io_pool_size": 65535, 00:23:14.141 "bdev_io_cache_size": 256, 00:23:14.141 "bdev_auto_examine": true, 00:23:14.141 "iobuf_small_cache_size": 128, 00:23:14.141 "iobuf_large_cache_size": 16 00:23:14.141 } 00:23:14.141 }, 00:23:14.141 { 00:23:14.141 "method": "bdev_raid_set_options", 00:23:14.141 "params": { 00:23:14.141 "process_window_size_kb": 1024, 00:23:14.141 "process_max_bandwidth_mb_sec": 0 00:23:14.141 } 00:23:14.141 }, 00:23:14.141 { 00:23:14.141 "method": "bdev_iscsi_set_options", 00:23:14.141 "params": { 00:23:14.141 "timeout_sec": 30 00:23:14.141 } 00:23:14.141 }, 00:23:14.141 { 00:23:14.141 "method": "bdev_nvme_set_options", 00:23:14.141 "params": { 00:23:14.141 "action_on_timeout": "none", 00:23:14.141 "timeout_us": 0, 00:23:14.141 "timeout_admin_us": 0, 00:23:14.141 "keep_alive_timeout_ms": 10000, 00:23:14.141 "arbitration_burst": 0, 00:23:14.141 "low_priority_weight": 0, 00:23:14.141 "medium_priority_weight": 0, 00:23:14.141 "high_priority_weight": 0, 00:23:14.141 "nvme_adminq_poll_period_us": 10000, 00:23:14.141 "nvme_ioq_poll_period_us": 0, 00:23:14.141 "io_queue_requests": 512, 00:23:14.141 "delay_cmd_submit": true, 00:23:14.141 "transport_retry_count": 4, 00:23:14.141 "bdev_retry_count": 3, 00:23:14.141 "transport_ack_timeout": 0, 00:23:14.141 "ctrlr_loss_timeout_sec": 0, 00:23:14.141 "reconnect_delay_sec": 0, 00:23:14.141 "fast_io_fail_timeout_sec": 0, 00:23:14.141 "disable_auto_failback": false, 00:23:14.141 "generate_uuids": false, 00:23:14.141 "transport_tos": 0, 00:23:14.141 "nvme_error_stat": false, 00:23:14.141 "rdma_srq_size": 0, 00:23:14.141 "io_path_stat": false, 00:23:14.141 "allow_accel_sequence": false, 00:23:14.141 "rdma_max_cq_size": 0, 00:23:14.141 "rdma_cm_event_timeout_ms": 0, 00:23:14.141 "dhchap_digests": [ 00:23:14.141 "sha256", 00:23:14.141 "sha384", 00:23:14.141 "sha512" 00:23:14.141 ], 00:23:14.141 "dhchap_dhgroups": [ 00:23:14.141 "null", 00:23:14.141 "ffdhe2048", 00:23:14.141 "ffdhe3072", 00:23:14.141 "ffdhe4096", 00:23:14.141 "ffdhe6144", 00:23:14.141 "ffdhe8192" 00:23:14.141 ] 00:23:14.141 } 00:23:14.141 }, 00:23:14.141 { 00:23:14.141 "method": "bdev_nvme_attach_controller", 00:23:14.141 "params": { 00:23:14.141 "name": "TLSTEST", 00:23:14.141 "trtype": "TCP", 00:23:14.141 "adrfam": "IPv4", 00:23:14.141 "traddr": "10.0.0.2", 00:23:14.141 "trsvcid": "4420", 00:23:14.141 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.141 "prchk_reftag": false, 00:23:14.141 "prchk_guard": false, 00:23:14.141 "ctrlr_loss_timeout_sec": 0, 00:23:14.141 "reconnect_delay_sec": 0, 00:23:14.141 "fast_io_fail_timeout_sec": 0, 00:23:14.141 "psk": "key0", 00:23:14.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:14.141 "hdgst": false, 00:23:14.141 "ddgst": false 00:23:14.141 } 00:23:14.141 }, 00:23:14.141 { 00:23:14.141 "method": "bdev_nvme_set_hotplug", 00:23:14.141 "params": { 00:23:14.141 "period_us": 100000, 00:23:14.141 "enable": false 
00:23:14.141 } 00:23:14.141 }, 00:23:14.141 { 00:23:14.141 "method": "bdev_wait_for_examine" 00:23:14.141 } 00:23:14.141 ] 00:23:14.141 }, 00:23:14.141 { 00:23:14.141 "subsystem": "nbd", 00:23:14.141 "config": [] 00:23:14.141 } 00:23:14.141 ] 00:23:14.141 }' 00:23:14.141 05:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3397420 00:23:14.141 05:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3397420 ']' 00:23:14.142 05:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3397420 00:23:14.142 05:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:14.142 05:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:14.142 05:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3397420 00:23:14.142 05:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:14.142 05:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:14.142 05:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3397420' 00:23:14.142 killing process with pid 3397420 00:23:14.142 05:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3397420 00:23:14.142 Received shutdown signal, test time was about 10.000000 seconds 00:23:14.142 00:23:14.142 Latency(us) 00:23:14.142 [2024-12-16T04:51:47.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.142 [2024-12-16T04:51:47.998Z] =================================================================================================================== 00:23:14.142 [2024-12-16T04:51:47.998Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:14.142 05:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3397420 00:23:14.401 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3397079 00:23:14.401 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3397079 ']' 00:23:14.401 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3397079 00:23:14.401 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:14.401 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:14.401 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3397079 00:23:14.401 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:14.401 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:14.401 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3397079' 00:23:14.401 killing process with pid 3397079 00:23:14.401 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3397079 00:23:14.401 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3397079 00:23:14.401 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:14.401 05:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:14.401 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:14.401 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:14.401 "subsystems": [ 00:23:14.401 { 00:23:14.401 "subsystem": "keyring", 00:23:14.401 "config": [ 00:23:14.401 { 00:23:14.401 "method": "keyring_file_add_key", 00:23:14.401 "params": { 00:23:14.401 "name": "key0", 00:23:14.401 "path": "/tmp/tmp.evnaAYHRVC" 00:23:14.401 } 00:23:14.401 } 00:23:14.401 ] 00:23:14.401 }, 00:23:14.401 { 00:23:14.401 "subsystem": "iobuf", 00:23:14.401 "config": [ 00:23:14.401 { 00:23:14.401 "method": "iobuf_set_options", 00:23:14.401 "params": { 00:23:14.401 "small_pool_count": 8192, 00:23:14.401 "large_pool_count": 1024, 00:23:14.401 "small_bufsize": 8192, 00:23:14.401 "large_bufsize": 135168 00:23:14.401 } 00:23:14.401 } 00:23:14.401 ] 00:23:14.401 }, 00:23:14.401 { 00:23:14.401 "subsystem": "sock", 00:23:14.401 "config": [ 00:23:14.401 { 00:23:14.401 "method": "sock_set_default_impl", 00:23:14.401 "params": { 00:23:14.401 "impl_name": "posix" 00:23:14.401 } 00:23:14.401 }, 00:23:14.401 { 00:23:14.401 "method": "sock_impl_set_options", 00:23:14.401 "params": { 00:23:14.401 "impl_name": "ssl", 00:23:14.401 "recv_buf_size": 4096, 00:23:14.401 "send_buf_size": 4096, 00:23:14.401 "enable_recv_pipe": true, 00:23:14.401 "enable_quickack": false, 00:23:14.401 "enable_placement_id": 0, 00:23:14.401 "enable_zerocopy_send_server": true, 00:23:14.401 "enable_zerocopy_send_client": false, 00:23:14.401 "zerocopy_threshold": 0, 00:23:14.401 "tls_version": 0, 00:23:14.401 "enable_ktls": false 00:23:14.401 } 00:23:14.401 }, 00:23:14.401 { 00:23:14.401 "method": "sock_impl_set_options", 00:23:14.401 "params": { 00:23:14.401 "impl_name": "posix", 00:23:14.401 "recv_buf_size": 2097152, 00:23:14.401 "send_buf_size": 2097152, 00:23:14.401 "enable_recv_pipe": true, 00:23:14.401 "enable_quickack": false, 00:23:14.401 "enable_placement_id": 0, 00:23:14.401 "enable_zerocopy_send_server": true, 00:23:14.401 "enable_zerocopy_send_client": false, 00:23:14.401 "zerocopy_threshold": 0, 00:23:14.401 "tls_version": 0, 00:23:14.401 "enable_ktls": false 00:23:14.401 } 00:23:14.401 } 00:23:14.401 ] 00:23:14.401 }, 00:23:14.401 { 00:23:14.401 "subsystem": "vmd", 00:23:14.401 "config": [] 00:23:14.401 }, 00:23:14.401 { 00:23:14.401 "subsystem": "accel", 00:23:14.401 "config": [ 00:23:14.401 { 00:23:14.401 "method": "accel_set_options", 00:23:14.401 "params": { 00:23:14.401 "small_cache_size": 128, 00:23:14.401 "large_cache_size": 16, 00:23:14.401 "task_count": 2048, 00:23:14.401 "sequence_count": 2048, 00:23:14.401 "buf_count": 2048 00:23:14.401 } 00:23:14.401 } 00:23:14.401 ] 00:23:14.401 }, 00:23:14.401 { 00:23:14.401 "subsystem": "bdev", 00:23:14.401 "config": [ 00:23:14.401 { 00:23:14.401 "method": "bdev_set_options", 00:23:14.401 "params": { 00:23:14.401 "bdev_io_pool_size": 65535, 00:23:14.401 "bdev_io_cache_size": 256, 00:23:14.401 "bdev_auto_examine": true, 00:23:14.401 "iobuf_small_cache_size": 128, 00:23:14.401 "iobuf_large_cache_size": 16 00:23:14.401 } 00:23:14.401 }, 00:23:14.401 { 00:23:14.401 "method": "bdev_raid_set_options", 00:23:14.401 "params": { 00:23:14.401 "process_window_size_kb": 1024, 00:23:14.401 "process_max_bandwidth_mb_sec": 0 00:23:14.401 } 00:23:14.401 }, 00:23:14.401 { 00:23:14.401 "method": "bdev_iscsi_set_options", 00:23:14.401 "params": { 00:23:14.401 "timeout_sec": 
30 00:23:14.401 } 00:23:14.401 }, 00:23:14.401 { 00:23:14.401 "method": "bdev_nvme_set_options", 00:23:14.401 "params": { 00:23:14.401 "action_on_timeout": "none", 00:23:14.401 "timeout_us": 0, 00:23:14.401 "timeout_admin_us": 0, 00:23:14.401 "keep_alive_timeout_ms": 10000, 00:23:14.401 "arbitration_burst": 0, 00:23:14.401 "low_priority_weight": 0, 00:23:14.401 "medium_priority_weight": 0, 00:23:14.401 "high_priority_weight": 0, 00:23:14.401 "nvme_adminq_poll_period_us": 10000, 00:23:14.401 "nvme_ioq_poll_period_us": 0, 00:23:14.401 "io_queue_requests": 0, 00:23:14.401 "delay_cmd_submit": true, 00:23:14.401 "transport_retry_count": 4, 00:23:14.401 "bdev_retry_count": 3, 00:23:14.401 "transport_ack_timeout": 0, 00:23:14.401 "ctrlr_loss_timeout_sec": 0, 00:23:14.401 "reconnect_delay_sec": 0, 00:23:14.401 "fast_io_fail_timeout_sec": 0, 00:23:14.401 "disable_auto_failback": false, 00:23:14.401 "generate_uuids": false, 00:23:14.401 "transport_tos": 0, 00:23:14.401 "nvme_error_stat": false, 00:23:14.401 "rdma_srq_size": 0, 00:23:14.401 "io_path_stat": false, 00:23:14.401 "allow_accel_sequence": false, 00:23:14.401 "rdma_max_cq_size": 0, 00:23:14.401 "rdma_cm_event_timeout_ms": 0, 00:23:14.401 "dhchap_digests": [ 00:23:14.401 "sha256", 00:23:14.401 "sha384", 00:23:14.401 "sha512" 00:23:14.401 ], 00:23:14.401 "dhchap_dhgroups": [ 00:23:14.401 "null", 00:23:14.401 "ffdhe2048", 00:23:14.401 "ffdhe3072", 00:23:14.401 "ffdhe4096", 00:23:14.401 "ffdhe6144", 00:23:14.401 "ffdhe8192" 00:23:14.401 ] 00:23:14.401 } 00:23:14.401 }, 00:23:14.401 { 00:23:14.401 "method": "bdev_nvme_set_hotplug", 00:23:14.401 "params": { 00:23:14.401 "period_us": 100000, 00:23:14.401 "enable": false 00:23:14.401 } 00:23:14.401 }, 00:23:14.401 { 00:23:14.401 "method": "bdev_malloc_create", 00:23:14.401 "params": { 00:23:14.401 "name": "malloc0", 00:23:14.401 "num_blocks": 8192, 00:23:14.401 "block_size": 4096, 00:23:14.401 "physical_block_size": 4096, 00:23:14.401 "uuid": "21e9cd8d-c6dd-4b96-beb9-764fbbf12852", 00:23:14.401 "optimal_io_boundary": 0, 00:23:14.401 "md_size": 0, 00:23:14.401 "dif_type": 0, 00:23:14.401 "dif_is_head_of_md": false, 00:23:14.401 "dif_pi_format": 0 00:23:14.401 } 00:23:14.401 }, 00:23:14.401 { 00:23:14.401 "method": "bdev_wait_for_examine" 00:23:14.401 } 00:23:14.401 ] 00:23:14.401 }, 00:23:14.401 { 00:23:14.401 "subsystem": "nbd", 00:23:14.401 "config": [] 00:23:14.401 }, 00:23:14.401 { 00:23:14.401 "subsystem": "scheduler", 00:23:14.401 "config": [ 00:23:14.401 { 00:23:14.401 "method": "framework_set_scheduler", 00:23:14.401 "params": { 00:23:14.401 "name": "static" 00:23:14.401 } 00:23:14.401 } 00:23:14.401 ] 00:23:14.401 }, 00:23:14.401 { 00:23:14.401 "subsystem": "nvmf", 00:23:14.401 "config": [ 00:23:14.401 { 00:23:14.402 "method": "nvmf_set_config", 00:23:14.402 "params": { 00:23:14.402 "discovery_filter": "match_any", 00:23:14.402 "admin_cmd_passthru": { 00:23:14.402 "identify_ctrlr": false 00:23:14.402 }, 00:23:14.402 "dhchap_digests": [ 00:23:14.402 "sha256", 00:23:14.402 "sha384", 00:23:14.402 "sha512" 00:23:14.402 ], 00:23:14.402 "dhchap_dhgroups": [ 00:23:14.402 "null", 00:23:14.402 "ffdhe2048", 00:23:14.402 "ffdhe3072", 00:23:14.402 "ffdhe4096", 00:23:14.402 "ffdhe6144", 00:23:14.402 "ffdhe8192" 00:23:14.402 ] 00:23:14.402 } 00:23:14.402 }, 00:23:14.402 { 00:23:14.402 "method": "nvmf_set_max_subsystems", 00:23:14.402 "params": { 00:23:14.402 "max_subsystems": 1024 00:23:14.402 } 00:23:14.402 }, 00:23:14.402 { 00:23:14.402 "method": "nvmf_set_crdt", 00:23:14.402 "params": { 00:23:14.402 
"crdt1": 0, 00:23:14.402 "crdt2": 0, 00:23:14.402 "crdt3": 0 00:23:14.402 } 00:23:14.402 }, 00:23:14.402 { 00:23:14.402 "method": "nvmf_create_transport", 00:23:14.402 "params": { 00:23:14.402 "trtype": "TCP", 00:23:14.402 "max_queue_depth": 128, 00:23:14.402 "max_io_qpairs_per_ctrlr": 127, 00:23:14.402 "in_capsule_data_size": 4096, 00:23:14.402 "max_io_size": 131072, 00:23:14.402 "io_unit_size": 131072, 00:23:14.402 "max_aq_depth": 128, 00:23:14.402 "num_shared_buffers": 511, 00:23:14.402 "buf_cache_size": 4294967295, 00:23:14.402 "dif_insert_or_strip": false, 00:23:14.402 "zcopy": false, 00:23:14.402 "c2h_success": false, 00:23:14.402 "sock_priority": 0, 00:23:14.402 "abort_timeout_sec": 1, 00:23:14.402 "ack_timeout": 0, 00:23:14.402 "data_wr_pool_size": 0 00:23:14.402 } 00:23:14.402 }, 00:23:14.402 { 00:23:14.402 "method": "nvmf_create_subsystem", 00:23:14.402 "params": { 00:23:14.402 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.402 "allow_any_host": false, 00:23:14.402 "serial_number": "SPDK00000000000001", 00:23:14.402 "model_number": "SPDK bdev Controller", 00:23:14.402 "max_namespaces": 10, 00:23:14.402 "min_cntlid": 1, 00:23:14.402 "max_cntlid": 65519, 00:23:14.402 "ana_reporting": false 00:23:14.402 } 00:23:14.402 }, 00:23:14.402 { 00:23:14.402 "method": "nvmf_subsystem_add_host", 00:23:14.402 "params": { 00:23:14.402 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.402 "host": "nqn.2016-06.io.spdk:host1", 00:23:14.402 "psk": "key0" 00:23:14.402 } 00:23:14.402 }, 00:23:14.402 { 00:23:14.402 "method": "nvmf_subsystem_add_ns", 00:23:14.402 "params": { 00:23:14.402 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.402 "namespace": { 00:23:14.402 "nsid": 1, 00:23:14.402 "bdev_name": "malloc0", 00:23:14.402 "nguid": "21E9CD8DC6DD4B96BEB9764FBBF12852", 00:23:14.402 "uuid": "21e9cd8d-c6dd-4b96-beb9-764fbbf12852", 00:23:14.402 "no_auto_visible": false 00:23:14.402 } 00:23:14.402 } 00:23:14.402 }, 00:23:14.402 { 00:23:14.402 "method": "nvmf_subsystem_add_listener", 00:23:14.402 "params": { 00:23:14.402 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.402 "listen_address": { 00:23:14.402 "trtype": "TCP", 00:23:14.402 "adrfam": "IPv4", 00:23:14.402 "traddr": "10.0.0.2", 00:23:14.402 "trsvcid": "4420" 00:23:14.402 }, 00:23:14.402 "secure_channel": true 00:23:14.402 } 00:23:14.402 } 00:23:14.402 ] 00:23:14.402 } 00:23:14.402 ] 00:23:14.402 }' 00:23:14.402 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.660 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3397736 00:23:14.660 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:14.660 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3397736 00:23:14.661 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3397736 ']' 00:23:14.661 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.661 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:14.661 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:14.661 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:14.661 05:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.661 [2024-12-16 05:51:48.302532] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:14.661 [2024-12-16 05:51:48.302579] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.661 [2024-12-16 05:51:48.359480] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.661 [2024-12-16 05:51:48.395518] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.661 [2024-12-16 05:51:48.395559] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.661 [2024-12-16 05:51:48.395566] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.661 [2024-12-16 05:51:48.395574] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.661 [2024-12-16 05:51:48.395579] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.661 [2024-12-16 05:51:48.395635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.919 [2024-12-16 05:51:48.616223] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.919 [2024-12-16 05:51:48.648243] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:14.919 [2024-12-16 05:51:48.648430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.485 05:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:15.485 05:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:15.485 05:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:15.485 05:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:15.485 05:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.485 05:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.485 05:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3397771 00:23:15.485 05:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3397771 /var/tmp/bdevperf.sock 00:23:15.485 05:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3397771 ']' 00:23:15.485 05:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.485 05:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:15.485 05:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
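The JSON blobs echoed around here are not hand-written: tgtconf and bdevperfconf were captured from the live applications with save_config (target/tls.sh@198-199 above) and are now fed back through -c so that the restarted nvmf_tgt and bdevperf come up already configured, PSK keyring included. The script streams them via /dev/fd/62 and /dev/fd/63; writing them to files is equivalent, shown here as a sketch with illustrative file names:

  scripts/rpc.py save_config > tgt.json                                 # target config, incl. keyring + TLS listener
  scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bdevperf.json  # initiator-side config
  ./build/bin/nvmf_tgt -m 0x2 -c tgt.json
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c bdevperf.json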
00:23:15.485 05:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:15.485 05:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.485 05:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:15.485 05:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:15.485 "subsystems": [ 00:23:15.485 { 00:23:15.485 "subsystem": "keyring", 00:23:15.485 "config": [ 00:23:15.485 { 00:23:15.485 "method": "keyring_file_add_key", 00:23:15.485 "params": { 00:23:15.485 "name": "key0", 00:23:15.485 "path": "/tmp/tmp.evnaAYHRVC" 00:23:15.485 } 00:23:15.485 } 00:23:15.485 ] 00:23:15.485 }, 00:23:15.485 { 00:23:15.485 "subsystem": "iobuf", 00:23:15.485 "config": [ 00:23:15.485 { 00:23:15.485 "method": "iobuf_set_options", 00:23:15.485 "params": { 00:23:15.486 "small_pool_count": 8192, 00:23:15.486 "large_pool_count": 1024, 00:23:15.486 "small_bufsize": 8192, 00:23:15.486 "large_bufsize": 135168 00:23:15.486 } 00:23:15.486 } 00:23:15.486 ] 00:23:15.486 }, 00:23:15.486 { 00:23:15.486 "subsystem": "sock", 00:23:15.486 "config": [ 00:23:15.486 { 00:23:15.486 "method": "sock_set_default_impl", 00:23:15.486 "params": { 00:23:15.486 "impl_name": "posix" 00:23:15.486 } 00:23:15.486 }, 00:23:15.486 { 00:23:15.486 "method": "sock_impl_set_options", 00:23:15.486 "params": { 00:23:15.486 "impl_name": "ssl", 00:23:15.486 "recv_buf_size": 4096, 00:23:15.486 "send_buf_size": 4096, 00:23:15.486 "enable_recv_pipe": true, 00:23:15.486 "enable_quickack": false, 00:23:15.486 "enable_placement_id": 0, 00:23:15.486 "enable_zerocopy_send_server": true, 00:23:15.486 "enable_zerocopy_send_client": false, 00:23:15.486 "zerocopy_threshold": 0, 00:23:15.486 "tls_version": 0, 00:23:15.486 "enable_ktls": false 00:23:15.486 } 00:23:15.486 }, 00:23:15.486 { 00:23:15.486 "method": "sock_impl_set_options", 00:23:15.486 "params": { 00:23:15.486 "impl_name": "posix", 00:23:15.486 "recv_buf_size": 2097152, 00:23:15.486 "send_buf_size": 2097152, 00:23:15.486 "enable_recv_pipe": true, 00:23:15.486 "enable_quickack": false, 00:23:15.486 "enable_placement_id": 0, 00:23:15.486 "enable_zerocopy_send_server": true, 00:23:15.486 "enable_zerocopy_send_client": false, 00:23:15.486 "zerocopy_threshold": 0, 00:23:15.486 "tls_version": 0, 00:23:15.486 "enable_ktls": false 00:23:15.486 } 00:23:15.486 } 00:23:15.486 ] 00:23:15.486 }, 00:23:15.486 { 00:23:15.486 "subsystem": "vmd", 00:23:15.486 "config": [] 00:23:15.486 }, 00:23:15.486 { 00:23:15.486 "subsystem": "accel", 00:23:15.486 "config": [ 00:23:15.486 { 00:23:15.486 "method": "accel_set_options", 00:23:15.486 "params": { 00:23:15.486 "small_cache_size": 128, 00:23:15.486 "large_cache_size": 16, 00:23:15.486 "task_count": 2048, 00:23:15.486 "sequence_count": 2048, 00:23:15.486 "buf_count": 2048 00:23:15.486 } 00:23:15.486 } 00:23:15.486 ] 00:23:15.486 }, 00:23:15.486 { 00:23:15.486 "subsystem": "bdev", 00:23:15.486 "config": [ 00:23:15.486 { 00:23:15.486 "method": "bdev_set_options", 00:23:15.486 "params": { 00:23:15.486 "bdev_io_pool_size": 65535, 00:23:15.486 "bdev_io_cache_size": 256, 00:23:15.486 "bdev_auto_examine": true, 00:23:15.486 "iobuf_small_cache_size": 128, 00:23:15.486 "iobuf_large_cache_size": 16 00:23:15.486 } 00:23:15.486 }, 00:23:15.486 { 00:23:15.486 "method": "bdev_raid_set_options", 00:23:15.486 
"params": { 00:23:15.486 "process_window_size_kb": 1024, 00:23:15.486 "process_max_bandwidth_mb_sec": 0 00:23:15.486 } 00:23:15.486 }, 00:23:15.486 { 00:23:15.486 "method": "bdev_iscsi_set_options", 00:23:15.486 "params": { 00:23:15.486 "timeout_sec": 30 00:23:15.486 } 00:23:15.486 }, 00:23:15.486 { 00:23:15.486 "method": "bdev_nvme_set_options", 00:23:15.486 "params": { 00:23:15.486 "action_on_timeout": "none", 00:23:15.486 "timeout_us": 0, 00:23:15.486 "timeout_admin_us": 0, 00:23:15.486 "keep_alive_timeout_ms": 10000, 00:23:15.486 "arbitration_burst": 0, 00:23:15.486 "low_priority_weight": 0, 00:23:15.486 "medium_priority_weight": 0, 00:23:15.486 "high_priority_weight": 0, 00:23:15.486 "nvme_adminq_poll_period_us": 10000, 00:23:15.486 "nvme_ioq_poll_period_us": 0, 00:23:15.486 "io_queue_requests": 512, 00:23:15.486 "delay_cmd_submit": true, 00:23:15.486 "transport_retry_count": 4, 00:23:15.486 "bdev_retry_count": 3, 00:23:15.486 "transport_ack_timeout": 0, 00:23:15.486 "ctrlr_loss_timeout_sec": 0, 00:23:15.486 "reconnect_delay_sec": 0, 00:23:15.486 "fast_io_fail_timeout_sec": 0, 00:23:15.486 "disable_auto_failback": false, 00:23:15.486 "generate_uuids": false, 00:23:15.486 "transport_tos": 0, 00:23:15.486 "nvme_error_stat": false, 00:23:15.486 "rdma_srq_size": 0, 00:23:15.486 "io_path_stat": false, 00:23:15.486 "allow_accel_sequence": false, 00:23:15.486 "rdma_max_cq_size": 0, 00:23:15.486 "rdma_cm_event_timeout_ms": 0, 00:23:15.486 "dhchap_digests": [ 00:23:15.486 "sha256", 00:23:15.486 "sha384", 00:23:15.486 "sha512" 00:23:15.486 ], 00:23:15.486 "dhchap_dhgroups": [ 00:23:15.486 "null", 00:23:15.486 "ffdhe2048", 00:23:15.486 "ffdhe3072", 00:23:15.486 "ffdhe4096", 00:23:15.486 "ffdhe6144", 00:23:15.486 "ffdhe8192" 00:23:15.486 ] 00:23:15.486 } 00:23:15.486 }, 00:23:15.486 { 00:23:15.486 "method": "bdev_nvme_attach_controller", 00:23:15.486 "params": { 00:23:15.486 "name": "TLSTEST", 00:23:15.486 "trtype": "TCP", 00:23:15.486 "adrfam": "IPv4", 00:23:15.486 "traddr": "10.0.0.2", 00:23:15.486 "trsvcid": "4420", 00:23:15.486 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.486 "prchk_reftag": false, 00:23:15.486 "prchk_guard": false, 00:23:15.486 "ctrlr_loss_timeout_sec": 0, 00:23:15.486 "reconnect_delay_sec": 0, 00:23:15.486 "fast_io_fail_timeout_sec": 0, 00:23:15.486 "psk": "key0", 00:23:15.486 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:15.486 "hdgst": false, 00:23:15.486 "ddgst": false 00:23:15.486 } 00:23:15.486 }, 00:23:15.486 { 00:23:15.486 "method": "bdev_nvme_set_hotplug", 00:23:15.486 "params": { 00:23:15.486 "period_us": 100000, 00:23:15.486 "enable": false 00:23:15.486 } 00:23:15.486 }, 00:23:15.486 { 00:23:15.486 "method": "bdev_wait_for_examine" 00:23:15.486 } 00:23:15.486 ] 00:23:15.486 }, 00:23:15.486 { 00:23:15.486 "subsystem": "nbd", 00:23:15.486 "config": [] 00:23:15.486 } 00:23:15.486 ] 00:23:15.486 }' 00:23:15.486 [2024-12-16 05:51:49.203198] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:15.486 [2024-12-16 05:51:49.203244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3397771 ] 00:23:15.486 [2024-12-16 05:51:49.252518] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.486 [2024-12-16 05:51:49.292357] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.745 [2024-12-16 05:51:49.439127] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.311 05:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:16.311 05:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:16.311 05:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:16.311 Running I/O for 10 seconds... 00:23:18.618 5310.00 IOPS, 20.74 MiB/s [2024-12-16T04:51:53.407Z] 5400.50 IOPS, 21.10 MiB/s [2024-12-16T04:51:54.341Z] 5315.00 IOPS, 20.76 MiB/s [2024-12-16T04:51:55.277Z] 5318.75 IOPS, 20.78 MiB/s [2024-12-16T04:51:56.213Z] 5397.00 IOPS, 21.08 MiB/s [2024-12-16T04:51:57.147Z] 5446.00 IOPS, 21.27 MiB/s [2024-12-16T04:51:58.520Z] 5492.14 IOPS, 21.45 MiB/s [2024-12-16T04:51:59.454Z] 5505.62 IOPS, 21.51 MiB/s [2024-12-16T04:52:00.558Z] 5525.67 IOPS, 21.58 MiB/s [2024-12-16T04:52:00.558Z] 5541.30 IOPS, 21.65 MiB/s 00:23:26.702 Latency(us) 00:23:26.702 [2024-12-16T04:52:00.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.703 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:26.703 Verification LBA range: start 0x0 length 0x2000 00:23:26.703 TLSTESTn1 : 10.02 5545.05 21.66 0.00 0.00 23049.17 6303.94 23967.45 00:23:26.703 [2024-12-16T04:52:00.559Z] =================================================================================================================== 00:23:26.703 [2024-12-16T04:52:00.559Z] Total : 5545.05 21.66 0.00 0.00 23049.17 6303.94 23967.45 00:23:26.703 { 00:23:26.703 "results": [ 00:23:26.703 { 00:23:26.703 "job": "TLSTESTn1", 00:23:26.703 "core_mask": "0x4", 00:23:26.703 "workload": "verify", 00:23:26.703 "status": "finished", 00:23:26.703 "verify_range": { 00:23:26.703 "start": 0, 00:23:26.703 "length": 8192 00:23:26.703 }, 00:23:26.703 "queue_depth": 128, 00:23:26.703 "io_size": 4096, 00:23:26.703 "runtime": 10.016144, 00:23:26.703 "iops": 5545.0480743887065, 00:23:26.703 "mibps": 21.660344040580885, 00:23:26.703 "io_failed": 0, 00:23:26.703 "io_timeout": 0, 00:23:26.703 "avg_latency_us": 23049.172857039972, 00:23:26.703 "min_latency_us": 6303.939047619047, 00:23:26.703 "max_latency_us": 23967.45142857143 00:23:26.703 } 00:23:26.703 ], 00:23:26.703 "core_count": 1 00:23:26.703 } 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3397771 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3397771 ']' 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3397771 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3397771 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3397771' 00:23:26.703 killing process with pid 3397771 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3397771 00:23:26.703 Received shutdown signal, test time was about 10.000000 seconds 00:23:26.703 00:23:26.703 Latency(us) 00:23:26.703 [2024-12-16T04:52:00.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.703 [2024-12-16T04:52:00.559Z] =================================================================================================================== 00:23:26.703 [2024-12-16T04:52:00.559Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3397771 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3397736 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3397736 ']' 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3397736 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3397736 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3397736' 00:23:26.703 killing process with pid 3397736 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3397736 00:23:26.703 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3397736 00:23:26.962 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:26.962 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:26.962 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:26.962 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.962 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3399683 00:23:26.962 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3399683 00:23:26.962 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
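Every nvmf_tgt in this run is launched through ip netns exec cvl_0_0_ns_spdk: the target and its 10.0.0.2 listener live in a dedicated network namespace, while bdevperf is started without it. A minimal illustration of that pattern; only the namespace name and the 10.0.0.2 address come from the trace, the interface name and prefix length are placeholders:

  ip netns add cvl_0_0_ns_spdk
  ip link set IFACE netns cvl_0_0_ns_spdk                        # IFACE: whichever port faces the initiator
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev IFACE
  ip netns exec cvl_0_0_ns_spdk ip link set IFACE up
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -e 0xFFFF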
00:23:26.962 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3399683 ']' 00:23:26.962 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.962 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:26.962 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.962 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:26.962 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.962 [2024-12-16 05:52:00.720950] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:26.962 [2024-12-16 05:52:00.721000] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.962 [2024-12-16 05:52:00.780657] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.220 [2024-12-16 05:52:00.817974] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.220 [2024-12-16 05:52:00.818011] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.220 [2024-12-16 05:52:00.818019] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.220 [2024-12-16 05:52:00.818028] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.220 [2024-12-16 05:52:00.818033] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:27.220 [2024-12-16 05:52:00.818052] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.220 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:27.220 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:27.220 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:27.220 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:27.220 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.220 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.220 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.evnaAYHRVC 00:23:27.220 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.evnaAYHRVC 00:23:27.220 05:52:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:27.479 [2024-12-16 05:52:01.106977] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.479 05:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:27.479 05:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:27.738 [2024-12-16 05:52:01.475918] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:27.738 [2024-12-16 05:52:01.476133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.738 05:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:27.996 malloc0 00:23:27.996 05:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:28.255 05:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.evnaAYHRVC 00:23:28.255 05:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:28.514 05:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:28.514 05:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3400027 00:23:28.514 05:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:28.514 05:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3400027 /var/tmp/bdevperf.sock 00:23:28.514 05:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 3400027 ']' 00:23:28.514 05:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.514 05:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:28.514 05:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.514 05:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:28.514 05:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.514 [2024-12-16 05:52:02.243553] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:28.514 [2024-12-16 05:52:02.243598] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400027 ] 00:23:28.514 [2024-12-16 05:52:02.297442] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.514 [2024-12-16 05:52:02.337685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.773 05:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:28.773 05:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:28.773 05:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.evnaAYHRVC 00:23:28.773 05:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:29.031 [2024-12-16 05:52:02.771722] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.031 nvme0n1 00:23:29.031 05:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:29.290 Running I/O for 1 seconds... 
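[editor's note] The RPC sequence traced above is what wires TLS into both ends: the PSK file is registered as a keyring key, the TCP listener is created with -k (TLS), and both the subsystem host entry and the initiator's controller attach reference the same key. A condensed sketch of that sequence, with every command taken from the trace (the /tmp/tmp.evnaAYHRVC path is this run's temporary PSK file, and $SPDK is the workspace checkout as above):

    RPC="$SPDK/scripts/rpc.py"
    # target side (default /var/tmp/spdk.sock)
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k makes this a TLS listener
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 /tmp/tmp.evnaAYHRVC
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # initiator side (bdevperf's RPC socket)
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.evnaAYHRVC
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # kick off the verify workload through bdevperf's RPC helper
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests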
00:23:30.226 5358.00 IOPS, 20.93 MiB/s 00:23:30.226 Latency(us) 00:23:30.226 [2024-12-16T04:52:04.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.226 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:30.226 Verification LBA range: start 0x0 length 0x2000 00:23:30.226 nvme0n1 : 1.03 5327.56 20.81 0.00 0.00 23683.91 6491.18 28586.18 00:23:30.226 [2024-12-16T04:52:04.082Z] =================================================================================================================== 00:23:30.226 [2024-12-16T04:52:04.082Z] Total : 5327.56 20.81 0.00 0.00 23683.91 6491.18 28586.18 00:23:30.226 { 00:23:30.226 "results": [ 00:23:30.226 { 00:23:30.226 "job": "nvme0n1", 00:23:30.226 "core_mask": "0x2", 00:23:30.226 "workload": "verify", 00:23:30.226 "status": "finished", 00:23:30.226 "verify_range": { 00:23:30.226 "start": 0, 00:23:30.226 "length": 8192 00:23:30.226 }, 00:23:30.226 "queue_depth": 128, 00:23:30.226 "io_size": 4096, 00:23:30.226 "runtime": 1.02974, 00:23:30.226 "iops": 5327.5584128032315, 00:23:30.226 "mibps": 20.810775050012623, 00:23:30.226 "io_failed": 0, 00:23:30.226 "io_timeout": 0, 00:23:30.226 "avg_latency_us": 23683.91206291339, 00:23:30.226 "min_latency_us": 6491.184761904762, 00:23:30.226 "max_latency_us": 28586.179047619047 00:23:30.226 } 00:23:30.226 ], 00:23:30.226 "core_count": 1 00:23:30.226 } 00:23:30.226 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3400027 00:23:30.226 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3400027 ']' 00:23:30.226 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3400027 00:23:30.226 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:30.226 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:30.226 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3400027 00:23:30.226 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:30.226 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:30.226 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3400027' 00:23:30.226 killing process with pid 3400027 00:23:30.226 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3400027 00:23:30.226 Received shutdown signal, test time was about 1.000000 seconds 00:23:30.226 00:23:30.226 Latency(us) 00:23:30.226 [2024-12-16T04:52:04.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.226 [2024-12-16T04:52:04.082Z] =================================================================================================================== 00:23:30.226 [2024-12-16T04:52:04.082Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:30.226 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3400027 00:23:30.485 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3399683 00:23:30.485 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3399683 ']' 00:23:30.485 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3399683 00:23:30.485 05:52:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:30.485 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:30.485 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3399683 00:23:30.485 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:30.485 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:30.485 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3399683' 00:23:30.485 killing process with pid 3399683 00:23:30.485 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3399683 00:23:30.485 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3399683 00:23:30.744 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:30.744 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:30.744 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:30.744 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.744 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3400272 00:23:30.744 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:30.744 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3400272 00:23:30.744 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3400272 ']' 00:23:30.744 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.744 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:30.744 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.744 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:30.744 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.744 [2024-12-16 05:52:04.518932] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:30.744 [2024-12-16 05:52:04.518984] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.744 [2024-12-16 05:52:04.579744] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.002 [2024-12-16 05:52:04.615021] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.002 [2024-12-16 05:52:04.615073] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
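[editor's note] Between test cases the harness tears down the previous processes with the killprocess helper traced above: it checks the pid is alive with kill -0, resolves the comm name with ps so the log records which reactor is being stopped, then kills and reaps it. A rough shell sketch of that pattern (the real helper lives in common/autotest_common.sh and does additional sudo handling):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0          # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid"                                     # reap the background job before restarting
    }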
00:23:31.002 [2024-12-16 05:52:04.615102] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.002 [2024-12-16 05:52:04.615110] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.002 [2024-12-16 05:52:04.615116] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:31.002 [2024-12-16 05:52:04.615135] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.002 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:31.002 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:31.002 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:31.002 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:31.002 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.002 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.002 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:31.002 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.002 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.002 [2024-12-16 05:52:04.744343] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.002 malloc0 00:23:31.002 [2024-12-16 05:52:04.782022] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:31.002 [2024-12-16 05:52:04.782226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.002 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.002 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3400413 00:23:31.002 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3400413 /var/tmp/bdevperf.sock 00:23:31.002 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3400413 ']' 00:23:31.002 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.002 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:31.002 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:31.002 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:31.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:31.002 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:31.002 05:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.260 [2024-12-16 05:52:04.858479] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:31.260 [2024-12-16 05:52:04.858521] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400413 ] 00:23:31.260 [2024-12-16 05:52:04.913616] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.260 [2024-12-16 05:52:04.953519] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.260 05:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:31.260 05:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:31.260 05:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.evnaAYHRVC 00:23:31.518 05:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:31.775 [2024-12-16 05:52:05.398712] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:31.775 nvme0n1 00:23:31.775 05:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:31.775 Running I/O for 1 seconds... 00:23:32.968 5343.00 IOPS, 20.87 MiB/s 00:23:32.968 Latency(us) 00:23:32.968 [2024-12-16T04:52:06.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.968 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:32.968 Verification LBA range: start 0x0 length 0x2000 00:23:32.968 nvme0n1 : 1.02 5389.51 21.05 0.00 0.00 23570.35 6397.56 27962.03 00:23:32.968 [2024-12-16T04:52:06.824Z] =================================================================================================================== 00:23:32.968 [2024-12-16T04:52:06.824Z] Total : 5389.51 21.05 0.00 0.00 23570.35 6397.56 27962.03 00:23:32.968 { 00:23:32.968 "results": [ 00:23:32.968 { 00:23:32.968 "job": "nvme0n1", 00:23:32.968 "core_mask": "0x2", 00:23:32.968 "workload": "verify", 00:23:32.968 "status": "finished", 00:23:32.968 "verify_range": { 00:23:32.968 "start": 0, 00:23:32.968 "length": 8192 00:23:32.968 }, 00:23:32.968 "queue_depth": 128, 00:23:32.968 "io_size": 4096, 00:23:32.968 "runtime": 1.01512, 00:23:32.968 "iops": 5389.510599732052, 00:23:32.968 "mibps": 21.052775780203326, 00:23:32.968 "io_failed": 0, 00:23:32.968 "io_timeout": 0, 00:23:32.968 "avg_latency_us": 23570.345741964124, 00:23:32.968 "min_latency_us": 6397.561904761905, 00:23:32.968 "max_latency_us": 27962.02666666667 00:23:32.968 } 00:23:32.968 ], 00:23:32.968 "core_count": 1 00:23:32.968 } 00:23:32.968 05:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:32.968 05:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.968 05:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.968 05:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.968 05:52:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:32.968 "subsystems": [ 00:23:32.968 { 00:23:32.968 "subsystem": "keyring", 00:23:32.968 "config": [ 00:23:32.968 { 00:23:32.968 "method": "keyring_file_add_key", 00:23:32.968 "params": { 00:23:32.968 "name": "key0", 00:23:32.968 "path": "/tmp/tmp.evnaAYHRVC" 00:23:32.968 } 00:23:32.968 } 00:23:32.968 ] 00:23:32.968 }, 00:23:32.968 { 00:23:32.968 "subsystem": "iobuf", 00:23:32.968 "config": [ 00:23:32.968 { 00:23:32.968 "method": "iobuf_set_options", 00:23:32.968 "params": { 00:23:32.968 "small_pool_count": 8192, 00:23:32.968 "large_pool_count": 1024, 00:23:32.968 "small_bufsize": 8192, 00:23:32.968 "large_bufsize": 135168 00:23:32.968 } 00:23:32.968 } 00:23:32.968 ] 00:23:32.968 }, 00:23:32.968 { 00:23:32.968 "subsystem": "sock", 00:23:32.969 "config": [ 00:23:32.969 { 00:23:32.969 "method": "sock_set_default_impl", 00:23:32.969 "params": { 00:23:32.969 "impl_name": "posix" 00:23:32.969 } 00:23:32.969 }, 00:23:32.969 { 00:23:32.969 "method": "sock_impl_set_options", 00:23:32.969 "params": { 00:23:32.969 "impl_name": "ssl", 00:23:32.969 "recv_buf_size": 4096, 00:23:32.969 "send_buf_size": 4096, 00:23:32.969 "enable_recv_pipe": true, 00:23:32.969 "enable_quickack": false, 00:23:32.969 "enable_placement_id": 0, 00:23:32.969 "enable_zerocopy_send_server": true, 00:23:32.969 "enable_zerocopy_send_client": false, 00:23:32.969 "zerocopy_threshold": 0, 00:23:32.969 "tls_version": 0, 00:23:32.969 "enable_ktls": false 00:23:32.969 } 00:23:32.969 }, 00:23:32.969 { 00:23:32.969 "method": "sock_impl_set_options", 00:23:32.969 "params": { 00:23:32.969 "impl_name": "posix", 00:23:32.969 "recv_buf_size": 2097152, 00:23:32.969 "send_buf_size": 2097152, 00:23:32.969 "enable_recv_pipe": true, 00:23:32.969 "enable_quickack": false, 00:23:32.969 "enable_placement_id": 0, 00:23:32.969 "enable_zerocopy_send_server": true, 00:23:32.969 "enable_zerocopy_send_client": false, 00:23:32.969 "zerocopy_threshold": 0, 00:23:32.969 "tls_version": 0, 00:23:32.969 "enable_ktls": false 00:23:32.969 } 00:23:32.969 } 00:23:32.969 ] 00:23:32.969 }, 00:23:32.969 { 00:23:32.969 "subsystem": "vmd", 00:23:32.969 "config": [] 00:23:32.969 }, 00:23:32.969 { 00:23:32.969 "subsystem": "accel", 00:23:32.969 "config": [ 00:23:32.969 { 00:23:32.969 "method": "accel_set_options", 00:23:32.969 "params": { 00:23:32.969 "small_cache_size": 128, 00:23:32.969 "large_cache_size": 16, 00:23:32.969 "task_count": 2048, 00:23:32.969 "sequence_count": 2048, 00:23:32.969 "buf_count": 2048 00:23:32.969 } 00:23:32.969 } 00:23:32.969 ] 00:23:32.969 }, 00:23:32.969 { 00:23:32.969 "subsystem": "bdev", 00:23:32.969 "config": [ 00:23:32.969 { 00:23:32.969 "method": "bdev_set_options", 00:23:32.969 "params": { 00:23:32.969 "bdev_io_pool_size": 65535, 00:23:32.969 "bdev_io_cache_size": 256, 00:23:32.969 "bdev_auto_examine": true, 00:23:32.969 "iobuf_small_cache_size": 128, 00:23:32.969 "iobuf_large_cache_size": 16 00:23:32.969 } 00:23:32.969 }, 00:23:32.969 { 00:23:32.969 "method": "bdev_raid_set_options", 00:23:32.969 "params": { 00:23:32.969 "process_window_size_kb": 1024, 00:23:32.969 "process_max_bandwidth_mb_sec": 0 00:23:32.969 } 00:23:32.969 }, 00:23:32.969 { 00:23:32.969 "method": "bdev_iscsi_set_options", 00:23:32.969 "params": { 00:23:32.969 "timeout_sec": 30 00:23:32.969 } 00:23:32.969 }, 00:23:32.969 { 00:23:32.969 "method": "bdev_nvme_set_options", 00:23:32.969 "params": { 00:23:32.969 "action_on_timeout": "none", 00:23:32.969 "timeout_us": 0, 00:23:32.969 
"timeout_admin_us": 0, 00:23:32.969 "keep_alive_timeout_ms": 10000, 00:23:32.969 "arbitration_burst": 0, 00:23:32.969 "low_priority_weight": 0, 00:23:32.969 "medium_priority_weight": 0, 00:23:32.969 "high_priority_weight": 0, 00:23:32.969 "nvme_adminq_poll_period_us": 10000, 00:23:32.969 "nvme_ioq_poll_period_us": 0, 00:23:32.969 "io_queue_requests": 0, 00:23:32.969 "delay_cmd_submit": true, 00:23:32.969 "transport_retry_count": 4, 00:23:32.969 "bdev_retry_count": 3, 00:23:32.969 "transport_ack_timeout": 0, 00:23:32.969 "ctrlr_loss_timeout_sec": 0, 00:23:32.969 "reconnect_delay_sec": 0, 00:23:32.969 "fast_io_fail_timeout_sec": 0, 00:23:32.969 "disable_auto_failback": false, 00:23:32.969 "generate_uuids": false, 00:23:32.969 "transport_tos": 0, 00:23:32.969 "nvme_error_stat": false, 00:23:32.969 "rdma_srq_size": 0, 00:23:32.969 "io_path_stat": false, 00:23:32.969 "allow_accel_sequence": false, 00:23:32.969 "rdma_max_cq_size": 0, 00:23:32.969 "rdma_cm_event_timeout_ms": 0, 00:23:32.969 "dhchap_digests": [ 00:23:32.969 "sha256", 00:23:32.969 "sha384", 00:23:32.969 "sha512" 00:23:32.969 ], 00:23:32.969 "dhchap_dhgroups": [ 00:23:32.969 "null", 00:23:32.969 "ffdhe2048", 00:23:32.969 "ffdhe3072", 00:23:32.969 "ffdhe4096", 00:23:32.969 "ffdhe6144", 00:23:32.969 "ffdhe8192" 00:23:32.969 ] 00:23:32.969 } 00:23:32.969 }, 00:23:32.969 { 00:23:32.969 "method": "bdev_nvme_set_hotplug", 00:23:32.969 "params": { 00:23:32.969 "period_us": 100000, 00:23:32.969 "enable": false 00:23:32.969 } 00:23:32.969 }, 00:23:32.969 { 00:23:32.969 "method": "bdev_malloc_create", 00:23:32.969 "params": { 00:23:32.969 "name": "malloc0", 00:23:32.969 "num_blocks": 8192, 00:23:32.969 "block_size": 4096, 00:23:32.969 "physical_block_size": 4096, 00:23:32.969 "uuid": "cbb1c756-01b2-4dcf-8600-85625a5f23e4", 00:23:32.969 "optimal_io_boundary": 0, 00:23:32.969 "md_size": 0, 00:23:32.969 "dif_type": 0, 00:23:32.969 "dif_is_head_of_md": false, 00:23:32.969 "dif_pi_format": 0 00:23:32.969 } 00:23:32.969 }, 00:23:32.969 { 00:23:32.969 "method": "bdev_wait_for_examine" 00:23:32.969 } 00:23:32.969 ] 00:23:32.969 }, 00:23:32.969 { 00:23:32.969 "subsystem": "nbd", 00:23:32.969 "config": [] 00:23:32.969 }, 00:23:32.969 { 00:23:32.969 "subsystem": "scheduler", 00:23:32.969 "config": [ 00:23:32.969 { 00:23:32.969 "method": "framework_set_scheduler", 00:23:32.969 "params": { 00:23:32.969 "name": "static" 00:23:32.969 } 00:23:32.969 } 00:23:32.969 ] 00:23:32.969 }, 00:23:32.969 { 00:23:32.969 "subsystem": "nvmf", 00:23:32.969 "config": [ 00:23:32.969 { 00:23:32.969 "method": "nvmf_set_config", 00:23:32.969 "params": { 00:23:32.969 "discovery_filter": "match_any", 00:23:32.969 "admin_cmd_passthru": { 00:23:32.969 "identify_ctrlr": false 00:23:32.969 }, 00:23:32.969 "dhchap_digests": [ 00:23:32.969 "sha256", 00:23:32.969 "sha384", 00:23:32.969 "sha512" 00:23:32.969 ], 00:23:32.969 "dhchap_dhgroups": [ 00:23:32.969 "null", 00:23:32.969 "ffdhe2048", 00:23:32.969 "ffdhe3072", 00:23:32.969 "ffdhe4096", 00:23:32.969 "ffdhe6144", 00:23:32.969 "ffdhe8192" 00:23:32.969 ] 00:23:32.969 } 00:23:32.969 }, 00:23:32.969 { 00:23:32.969 "method": "nvmf_set_max_subsystems", 00:23:32.969 "params": { 00:23:32.969 "max_subsystems": 1024 00:23:32.969 } 00:23:32.969 }, 00:23:32.969 { 00:23:32.969 "method": "nvmf_set_crdt", 00:23:32.969 "params": { 00:23:32.969 "crdt1": 0, 00:23:32.969 "crdt2": 0, 00:23:32.969 "crdt3": 0 00:23:32.969 } 00:23:32.969 }, 00:23:32.969 { 00:23:32.969 "method": "nvmf_create_transport", 00:23:32.969 "params": { 00:23:32.969 "trtype": 
"TCP", 00:23:32.969 "max_queue_depth": 128, 00:23:32.969 "max_io_qpairs_per_ctrlr": 127, 00:23:32.969 "in_capsule_data_size": 4096, 00:23:32.969 "max_io_size": 131072, 00:23:32.969 "io_unit_size": 131072, 00:23:32.969 "max_aq_depth": 128, 00:23:32.969 "num_shared_buffers": 511, 00:23:32.969 "buf_cache_size": 4294967295, 00:23:32.969 "dif_insert_or_strip": false, 00:23:32.969 "zcopy": false, 00:23:32.969 "c2h_success": false, 00:23:32.969 "sock_priority": 0, 00:23:32.969 "abort_timeout_sec": 1, 00:23:32.969 "ack_timeout": 0, 00:23:32.969 "data_wr_pool_size": 0 00:23:32.969 } 00:23:32.969 }, 00:23:32.969 { 00:23:32.969 "method": "nvmf_create_subsystem", 00:23:32.969 "params": { 00:23:32.969 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.969 "allow_any_host": false, 00:23:32.969 "serial_number": "00000000000000000000", 00:23:32.969 "model_number": "SPDK bdev Controller", 00:23:32.969 "max_namespaces": 32, 00:23:32.969 "min_cntlid": 1, 00:23:32.969 "max_cntlid": 65519, 00:23:32.969 "ana_reporting": false 00:23:32.969 } 00:23:32.969 }, 00:23:32.969 { 00:23:32.969 "method": "nvmf_subsystem_add_host", 00:23:32.969 "params": { 00:23:32.969 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.969 "host": "nqn.2016-06.io.spdk:host1", 00:23:32.969 "psk": "key0" 00:23:32.969 } 00:23:32.969 }, 00:23:32.969 { 00:23:32.969 "method": "nvmf_subsystem_add_ns", 00:23:32.969 "params": { 00:23:32.969 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.969 "namespace": { 00:23:32.969 "nsid": 1, 00:23:32.969 "bdev_name": "malloc0", 00:23:32.969 "nguid": "CBB1C75601B24DCF860085625A5F23E4", 00:23:32.969 "uuid": "cbb1c756-01b2-4dcf-8600-85625a5f23e4", 00:23:32.969 "no_auto_visible": false 00:23:32.969 } 00:23:32.969 } 00:23:32.969 }, 00:23:32.969 { 00:23:32.969 "method": "nvmf_subsystem_add_listener", 00:23:32.969 "params": { 00:23:32.969 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.969 "listen_address": { 00:23:32.969 "trtype": "TCP", 00:23:32.969 "adrfam": "IPv4", 00:23:32.969 "traddr": "10.0.0.2", 00:23:32.969 "trsvcid": "4420" 00:23:32.969 }, 00:23:32.969 "secure_channel": false, 00:23:32.969 "sock_impl": "ssl" 00:23:32.969 } 00:23:32.969 } 00:23:32.969 ] 00:23:32.969 } 00:23:32.969 ] 00:23:32.969 }' 00:23:32.969 05:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:33.228 05:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:33.229 "subsystems": [ 00:23:33.229 { 00:23:33.229 "subsystem": "keyring", 00:23:33.229 "config": [ 00:23:33.229 { 00:23:33.229 "method": "keyring_file_add_key", 00:23:33.229 "params": { 00:23:33.229 "name": "key0", 00:23:33.229 "path": "/tmp/tmp.evnaAYHRVC" 00:23:33.229 } 00:23:33.229 } 00:23:33.229 ] 00:23:33.229 }, 00:23:33.229 { 00:23:33.229 "subsystem": "iobuf", 00:23:33.229 "config": [ 00:23:33.229 { 00:23:33.229 "method": "iobuf_set_options", 00:23:33.229 "params": { 00:23:33.229 "small_pool_count": 8192, 00:23:33.229 "large_pool_count": 1024, 00:23:33.229 "small_bufsize": 8192, 00:23:33.229 "large_bufsize": 135168 00:23:33.229 } 00:23:33.229 } 00:23:33.229 ] 00:23:33.229 }, 00:23:33.229 { 00:23:33.229 "subsystem": "sock", 00:23:33.229 "config": [ 00:23:33.229 { 00:23:33.229 "method": "sock_set_default_impl", 00:23:33.229 "params": { 00:23:33.229 "impl_name": "posix" 00:23:33.229 } 00:23:33.229 }, 00:23:33.229 { 00:23:33.229 "method": "sock_impl_set_options", 00:23:33.229 "params": { 00:23:33.229 "impl_name": "ssl", 00:23:33.229 
"recv_buf_size": 4096, 00:23:33.229 "send_buf_size": 4096, 00:23:33.229 "enable_recv_pipe": true, 00:23:33.229 "enable_quickack": false, 00:23:33.229 "enable_placement_id": 0, 00:23:33.229 "enable_zerocopy_send_server": true, 00:23:33.229 "enable_zerocopy_send_client": false, 00:23:33.229 "zerocopy_threshold": 0, 00:23:33.229 "tls_version": 0, 00:23:33.229 "enable_ktls": false 00:23:33.229 } 00:23:33.229 }, 00:23:33.229 { 00:23:33.229 "method": "sock_impl_set_options", 00:23:33.229 "params": { 00:23:33.229 "impl_name": "posix", 00:23:33.229 "recv_buf_size": 2097152, 00:23:33.229 "send_buf_size": 2097152, 00:23:33.229 "enable_recv_pipe": true, 00:23:33.229 "enable_quickack": false, 00:23:33.229 "enable_placement_id": 0, 00:23:33.229 "enable_zerocopy_send_server": true, 00:23:33.229 "enable_zerocopy_send_client": false, 00:23:33.229 "zerocopy_threshold": 0, 00:23:33.229 "tls_version": 0, 00:23:33.229 "enable_ktls": false 00:23:33.229 } 00:23:33.229 } 00:23:33.229 ] 00:23:33.229 }, 00:23:33.229 { 00:23:33.229 "subsystem": "vmd", 00:23:33.229 "config": [] 00:23:33.229 }, 00:23:33.229 { 00:23:33.229 "subsystem": "accel", 00:23:33.229 "config": [ 00:23:33.229 { 00:23:33.229 "method": "accel_set_options", 00:23:33.229 "params": { 00:23:33.229 "small_cache_size": 128, 00:23:33.229 "large_cache_size": 16, 00:23:33.229 "task_count": 2048, 00:23:33.229 "sequence_count": 2048, 00:23:33.229 "buf_count": 2048 00:23:33.229 } 00:23:33.229 } 00:23:33.229 ] 00:23:33.229 }, 00:23:33.229 { 00:23:33.229 "subsystem": "bdev", 00:23:33.229 "config": [ 00:23:33.229 { 00:23:33.229 "method": "bdev_set_options", 00:23:33.229 "params": { 00:23:33.229 "bdev_io_pool_size": 65535, 00:23:33.229 "bdev_io_cache_size": 256, 00:23:33.229 "bdev_auto_examine": true, 00:23:33.229 "iobuf_small_cache_size": 128, 00:23:33.229 "iobuf_large_cache_size": 16 00:23:33.229 } 00:23:33.229 }, 00:23:33.229 { 00:23:33.229 "method": "bdev_raid_set_options", 00:23:33.229 "params": { 00:23:33.229 "process_window_size_kb": 1024, 00:23:33.229 "process_max_bandwidth_mb_sec": 0 00:23:33.229 } 00:23:33.229 }, 00:23:33.229 { 00:23:33.229 "method": "bdev_iscsi_set_options", 00:23:33.229 "params": { 00:23:33.229 "timeout_sec": 30 00:23:33.229 } 00:23:33.229 }, 00:23:33.229 { 00:23:33.229 "method": "bdev_nvme_set_options", 00:23:33.229 "params": { 00:23:33.229 "action_on_timeout": "none", 00:23:33.229 "timeout_us": 0, 00:23:33.229 "timeout_admin_us": 0, 00:23:33.229 "keep_alive_timeout_ms": 10000, 00:23:33.229 "arbitration_burst": 0, 00:23:33.229 "low_priority_weight": 0, 00:23:33.229 "medium_priority_weight": 0, 00:23:33.229 "high_priority_weight": 0, 00:23:33.229 "nvme_adminq_poll_period_us": 10000, 00:23:33.229 "nvme_ioq_poll_period_us": 0, 00:23:33.229 "io_queue_requests": 512, 00:23:33.229 "delay_cmd_submit": true, 00:23:33.229 "transport_retry_count": 4, 00:23:33.229 "bdev_retry_count": 3, 00:23:33.229 "transport_ack_timeout": 0, 00:23:33.229 "ctrlr_loss_timeout_sec": 0, 00:23:33.229 "reconnect_delay_sec": 0, 00:23:33.229 "fast_io_fail_timeout_sec": 0, 00:23:33.229 "disable_auto_failback": false, 00:23:33.229 "generate_uuids": false, 00:23:33.229 "transport_tos": 0, 00:23:33.229 "nvme_error_stat": false, 00:23:33.229 "rdma_srq_size": 0, 00:23:33.229 "io_path_stat": false, 00:23:33.229 "allow_accel_sequence": false, 00:23:33.229 "rdma_max_cq_size": 0, 00:23:33.229 "rdma_cm_event_timeout_ms": 0, 00:23:33.229 "dhchap_digests": [ 00:23:33.229 "sha256", 00:23:33.229 "sha384", 00:23:33.229 "sha512" 00:23:33.229 ], 00:23:33.229 "dhchap_dhgroups": [ 
00:23:33.229 "null", 00:23:33.229 "ffdhe2048", 00:23:33.229 "ffdhe3072", 00:23:33.229 "ffdhe4096", 00:23:33.229 "ffdhe6144", 00:23:33.229 "ffdhe8192" 00:23:33.229 ] 00:23:33.229 } 00:23:33.229 }, 00:23:33.229 { 00:23:33.229 "method": "bdev_nvme_attach_controller", 00:23:33.229 "params": { 00:23:33.229 "name": "nvme0", 00:23:33.229 "trtype": "TCP", 00:23:33.229 "adrfam": "IPv4", 00:23:33.229 "traddr": "10.0.0.2", 00:23:33.229 "trsvcid": "4420", 00:23:33.229 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.229 "prchk_reftag": false, 00:23:33.229 "prchk_guard": false, 00:23:33.229 "ctrlr_loss_timeout_sec": 0, 00:23:33.229 "reconnect_delay_sec": 0, 00:23:33.229 "fast_io_fail_timeout_sec": 0, 00:23:33.229 "psk": "key0", 00:23:33.229 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:33.229 "hdgst": false, 00:23:33.229 "ddgst": false 00:23:33.229 } 00:23:33.229 }, 00:23:33.229 { 00:23:33.229 "method": "bdev_nvme_set_hotplug", 00:23:33.229 "params": { 00:23:33.229 "period_us": 100000, 00:23:33.229 "enable": false 00:23:33.229 } 00:23:33.229 }, 00:23:33.229 { 00:23:33.229 "method": "bdev_enable_histogram", 00:23:33.229 "params": { 00:23:33.229 "name": "nvme0n1", 00:23:33.229 "enable": true 00:23:33.229 } 00:23:33.229 }, 00:23:33.229 { 00:23:33.229 "method": "bdev_wait_for_examine" 00:23:33.229 } 00:23:33.229 ] 00:23:33.229 }, 00:23:33.229 { 00:23:33.229 "subsystem": "nbd", 00:23:33.229 "config": [] 00:23:33.229 } 00:23:33.229 ] 00:23:33.229 }' 00:23:33.229 05:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3400413 00:23:33.229 05:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3400413 ']' 00:23:33.229 05:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3400413 00:23:33.229 05:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:33.229 05:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:33.229 05:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3400413 00:23:33.229 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:33.229 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:33.229 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3400413' 00:23:33.229 killing process with pid 3400413 00:23:33.229 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3400413 00:23:33.229 Received shutdown signal, test time was about 1.000000 seconds 00:23:33.229 00:23:33.229 Latency(us) 00:23:33.229 [2024-12-16T04:52:07.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.229 [2024-12-16T04:52:07.085Z] =================================================================================================================== 00:23:33.229 [2024-12-16T04:52:07.085Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.229 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3400413 00:23:33.488 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3400272 00:23:33.488 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3400272 ']' 00:23:33.488 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # 
kill -0 3400272 00:23:33.488 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:33.488 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:33.488 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3400272 00:23:33.488 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:33.488 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:33.488 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3400272' 00:23:33.488 killing process with pid 3400272 00:23:33.488 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3400272 00:23:33.488 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3400272 00:23:33.746 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:33.746 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:33.746 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:33.746 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.746 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:33.746 "subsystems": [ 00:23:33.746 { 00:23:33.746 "subsystem": "keyring", 00:23:33.746 "config": [ 00:23:33.746 { 00:23:33.746 "method": "keyring_file_add_key", 00:23:33.746 "params": { 00:23:33.746 "name": "key0", 00:23:33.746 "path": "/tmp/tmp.evnaAYHRVC" 00:23:33.746 } 00:23:33.746 } 00:23:33.746 ] 00:23:33.746 }, 00:23:33.746 { 00:23:33.746 "subsystem": "iobuf", 00:23:33.746 "config": [ 00:23:33.746 { 00:23:33.746 "method": "iobuf_set_options", 00:23:33.746 "params": { 00:23:33.746 "small_pool_count": 8192, 00:23:33.746 "large_pool_count": 1024, 00:23:33.746 "small_bufsize": 8192, 00:23:33.746 "large_bufsize": 135168 00:23:33.746 } 00:23:33.746 } 00:23:33.746 ] 00:23:33.746 }, 00:23:33.746 { 00:23:33.746 "subsystem": "sock", 00:23:33.746 "config": [ 00:23:33.746 { 00:23:33.746 "method": "sock_set_default_impl", 00:23:33.746 "params": { 00:23:33.746 "impl_name": "posix" 00:23:33.746 } 00:23:33.746 }, 00:23:33.746 { 00:23:33.746 "method": "sock_impl_set_options", 00:23:33.746 "params": { 00:23:33.746 "impl_name": "ssl", 00:23:33.746 "recv_buf_size": 4096, 00:23:33.746 "send_buf_size": 4096, 00:23:33.746 "enable_recv_pipe": true, 00:23:33.746 "enable_quickack": false, 00:23:33.746 "enable_placement_id": 0, 00:23:33.746 "enable_zerocopy_send_server": true, 00:23:33.746 "enable_zerocopy_send_client": false, 00:23:33.746 "zerocopy_threshold": 0, 00:23:33.746 "tls_version": 0, 00:23:33.746 "enable_ktls": false 00:23:33.746 } 00:23:33.746 }, 00:23:33.746 { 00:23:33.746 "method": "sock_impl_set_options", 00:23:33.746 "params": { 00:23:33.746 "impl_name": "posix", 00:23:33.746 "recv_buf_size": 2097152, 00:23:33.746 "send_buf_size": 2097152, 00:23:33.746 "enable_recv_pipe": true, 00:23:33.746 "enable_quickack": false, 00:23:33.746 "enable_placement_id": 0, 00:23:33.746 "enable_zerocopy_send_server": true, 00:23:33.746 "enable_zerocopy_send_client": false, 00:23:33.746 "zerocopy_threshold": 0, 00:23:33.746 "tls_version": 0, 00:23:33.746 "enable_ktls": false 00:23:33.746 } 00:23:33.746 } 
00:23:33.746 ] 00:23:33.746 }, 00:23:33.746 { 00:23:33.746 "subsystem": "vmd", 00:23:33.746 "config": [] 00:23:33.746 }, 00:23:33.746 { 00:23:33.747 "subsystem": "accel", 00:23:33.747 "config": [ 00:23:33.747 { 00:23:33.747 "method": "accel_set_options", 00:23:33.747 "params": { 00:23:33.747 "small_cache_size": 128, 00:23:33.747 "large_cache_size": 16, 00:23:33.747 "task_count": 2048, 00:23:33.747 "sequence_count": 2048, 00:23:33.747 "buf_count": 2048 00:23:33.747 } 00:23:33.747 } 00:23:33.747 ] 00:23:33.747 }, 00:23:33.747 { 00:23:33.747 "subsystem": "bdev", 00:23:33.747 "config": [ 00:23:33.747 { 00:23:33.747 "method": "bdev_set_options", 00:23:33.747 "params": { 00:23:33.747 "bdev_io_pool_size": 65535, 00:23:33.747 "bdev_io_cache_size": 256, 00:23:33.747 "bdev_auto_examine": true, 00:23:33.747 "iobuf_small_cache_size": 128, 00:23:33.747 "iobuf_large_cache_size": 16 00:23:33.747 } 00:23:33.747 }, 00:23:33.747 { 00:23:33.747 "method": "bdev_raid_set_options", 00:23:33.747 "params": { 00:23:33.747 "process_window_size_kb": 1024, 00:23:33.747 "process_max_bandwidth_mb_sec": 0 00:23:33.747 } 00:23:33.747 }, 00:23:33.747 { 00:23:33.747 "method": "bdev_iscsi_set_options", 00:23:33.747 "params": { 00:23:33.747 "timeout_sec": 30 00:23:33.747 } 00:23:33.747 }, 00:23:33.747 { 00:23:33.747 "method": "bdev_nvme_set_options", 00:23:33.747 "params": { 00:23:33.747 "action_on_timeout": "none", 00:23:33.747 "timeout_us": 0, 00:23:33.747 "timeout_admin_us": 0, 00:23:33.747 "keep_alive_timeout_ms": 10000, 00:23:33.747 "arbitration_burst": 0, 00:23:33.747 "low_priority_weight": 0, 00:23:33.747 "medium_priority_weight": 0, 00:23:33.747 "high_priority_weight": 0, 00:23:33.747 "nvme_adminq_poll_period_us": 10000, 00:23:33.747 "nvme_ioq_poll_period_us": 0, 00:23:33.747 "io_queue_requests": 0, 00:23:33.747 "delay_cmd_submit": true, 00:23:33.747 "transport_retry_count": 4, 00:23:33.747 "bdev_retry_count": 3, 00:23:33.747 "transport_ack_timeout": 0, 00:23:33.747 "ctrlr_loss_timeout_sec": 0, 00:23:33.747 "reconnect_delay_sec": 0, 00:23:33.747 "fast_io_fail_timeout_sec": 0, 00:23:33.747 "disable_auto_failback": false, 00:23:33.747 "generate_uuids": false, 00:23:33.747 "transport_tos": 0, 00:23:33.747 "nvme_error_stat": false, 00:23:33.747 "rdma_srq_size": 0, 00:23:33.747 "io_path_stat": false, 00:23:33.747 "allow_accel_sequence": false, 00:23:33.747 "rdma_max_cq_size": 0, 00:23:33.747 "rdma_cm_event_timeout_ms": 0, 00:23:33.747 "dhchap_digests": [ 00:23:33.747 "sha256", 00:23:33.747 "sha384", 00:23:33.747 "sha512" 00:23:33.747 ], 00:23:33.747 "dhchap_dhgroups": [ 00:23:33.747 "null", 00:23:33.747 "ffdhe2048", 00:23:33.747 "ffdhe3072", 00:23:33.747 "ffdhe4096", 00:23:33.747 "ffdhe6144", 00:23:33.747 "ffdhe8192" 00:23:33.747 ] 00:23:33.747 } 00:23:33.747 }, 00:23:33.747 { 00:23:33.747 "method": "bdev_nvme_set_hotplug", 00:23:33.747 "params": { 00:23:33.747 "period_us": 100000, 00:23:33.747 "enable": false 00:23:33.747 } 00:23:33.747 }, 00:23:33.747 { 00:23:33.747 "method": "bdev_malloc_create", 00:23:33.747 "params": { 00:23:33.747 "name": "malloc0", 00:23:33.747 "num_blocks": 8192, 00:23:33.747 "block_size": 4096, 00:23:33.747 "physical_block_size": 4096, 00:23:33.747 "uuid": "cbb1c756-01b2-4dcf-8600-85625a5f23e4", 00:23:33.747 "optimal_io_boundary": 0, 00:23:33.747 "md_size": 0, 00:23:33.747 "dif_type": 0, 00:23:33.747 "dif_is_head_of_md": false, 00:23:33.747 "dif_pi_format": 0 00:23:33.747 } 00:23:33.747 }, 00:23:33.747 { 00:23:33.747 "method": "bdev_wait_for_examine" 00:23:33.747 } 00:23:33.747 ] 00:23:33.747 }, 
00:23:33.747 { 00:23:33.747 "subsystem": "nbd", 00:23:33.747 "config": [] 00:23:33.747 }, 00:23:33.747 { 00:23:33.747 "subsystem": "scheduler", 00:23:33.747 "config": [ 00:23:33.747 { 00:23:33.747 "method": "framework_set_scheduler", 00:23:33.747 "params": { 00:23:33.747 "name": "static" 00:23:33.747 } 00:23:33.747 } 00:23:33.747 ] 00:23:33.747 }, 00:23:33.747 { 00:23:33.747 "subsystem": "nvmf", 00:23:33.747 "config": [ 00:23:33.747 { 00:23:33.747 "method": "nvmf_set_config", 00:23:33.747 "params": { 00:23:33.747 "discovery_filter": "match_any", 00:23:33.747 "admin_cmd_passthru": { 00:23:33.747 "identify_ctrlr": false 00:23:33.747 }, 00:23:33.747 "dhchap_digests": [ 00:23:33.747 "sha256", 00:23:33.747 "sha384", 00:23:33.747 "sha512" 00:23:33.747 ], 00:23:33.747 "dhchap_dhgroups": [ 00:23:33.747 "null", 00:23:33.747 "ffdhe2048", 00:23:33.747 "ffdhe3072", 00:23:33.747 "ffdhe4096", 00:23:33.747 "ffdhe6144", 00:23:33.747 "ffdhe8192" 00:23:33.747 ] 00:23:33.747 } 00:23:33.747 }, 00:23:33.747 { 00:23:33.747 "method": "nvmf_set_max_subsystems", 00:23:33.747 "params": { 00:23:33.747 "max_subsystems": 1024 00:23:33.747 } 00:23:33.747 }, 00:23:33.747 { 00:23:33.747 "method": "nvmf_set_crdt", 00:23:33.747 "params": { 00:23:33.747 "crdt1": 0, 00:23:33.747 "crdt2": 0, 00:23:33.747 "crdt3": 0 00:23:33.747 } 00:23:33.747 }, 00:23:33.747 { 00:23:33.747 "method": "nvmf_create_transport", 00:23:33.747 "params": { 00:23:33.747 "trtype": "TCP", 00:23:33.747 "max_queue_depth": 128, 00:23:33.747 "max_io_qpairs_per_ctrlr": 127, 00:23:33.747 "in_capsule_data_size": 4096, 00:23:33.747 "max_io_size": 131072, 00:23:33.747 "io_unit_size": 131072, 00:23:33.747 "max_aq_depth": 128, 00:23:33.747 "num_shared_buffers": 511, 00:23:33.747 "buf_cache_size": 4294967295, 00:23:33.747 "dif_insert_or_strip": false, 00:23:33.747 "zcopy": false, 00:23:33.747 "c2h_success": false, 00:23:33.747 "sock_priority": 0, 00:23:33.747 "abort_timeout_sec": 1, 00:23:33.747 "ack_timeout": 0, 00:23:33.747 "data_wr_pool_size": 0 00:23:33.747 } 00:23:33.747 }, 00:23:33.747 { 00:23:33.747 "method": "nvmf_create_subsystem", 00:23:33.747 "params": { 00:23:33.747 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.747 "allow_any_host": false, 00:23:33.747 "serial_number": "00000000000000000000", 00:23:33.747 "model_number": "SPDK bdev Controller", 00:23:33.747 "max_namespaces": 32, 00:23:33.747 "min_cntlid": 1, 00:23:33.747 "max_cntlid": 65519, 00:23:33.747 "ana_reporting": false 00:23:33.747 } 00:23:33.747 }, 00:23:33.747 { 00:23:33.747 "method": "nvmf_subsystem_add_host", 00:23:33.747 "params": { 00:23:33.747 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.747 "host": "nqn.2016-06.io.spdk:host1", 00:23:33.747 "psk": "key0" 00:23:33.747 } 00:23:33.747 }, 00:23:33.747 { 00:23:33.747 "method": "nvmf_subsystem_add_ns", 00:23:33.747 "params": { 00:23:33.747 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.747 "namespace": { 00:23:33.747 "nsid": 1, 00:23:33.747 "bdev_name": "malloc0", 00:23:33.747 "nguid": "CBB1C75601B24DCF860085625A5F23E4", 00:23:33.747 "uuid": "cbb1c756-01b2-4dcf-8600-85625a5f23e4", 00:23:33.747 "no_auto_visible": false 00:23:33.747 } 00:23:33.747 } 00:23:33.747 }, 00:23:33.747 { 00:23:33.747 "method": "nvmf_subsystem_add_listener", 00:23:33.747 "params": { 00:23:33.747 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.747 "listen_address": { 00:23:33.747 "trtype": "TCP", 00:23:33.747 "adrfam": "IPv4", 00:23:33.747 "traddr": "10.0.0.2", 00:23:33.747 "trsvcid": "4420" 00:23:33.747 }, 00:23:33.747 "secure_channel": false, 00:23:33.747 "sock_impl": 
"ssl" 00:23:33.747 } 00:23:33.747 } 00:23:33.747 ] 00:23:33.747 } 00:23:33.747 ] 00:23:33.747 }' 00:23:33.747 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3400772 00:23:33.747 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3400772 00:23:33.747 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3400772 ']' 00:23:33.747 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.747 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:33.747 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.747 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:33.747 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:33.747 05:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.747 [2024-12-16 05:52:07.481361] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:33.747 [2024-12-16 05:52:07.481404] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.747 [2024-12-16 05:52:07.540329] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.747 [2024-12-16 05:52:07.579319] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.747 [2024-12-16 05:52:07.579356] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.747 [2024-12-16 05:52:07.579363] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.747 [2024-12-16 05:52:07.579369] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.747 [2024-12-16 05:52:07.579374] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:33.747 [2024-12-16 05:52:07.579422] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.004 [2024-12-16 05:52:07.802127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.004 [2024-12-16 05:52:07.834143] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:34.004 [2024-12-16 05:52:07.834326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.568 05:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:34.568 05:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:34.568 05:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:34.568 05:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:34.568 05:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.568 05:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.568 05:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3400997 00:23:34.568 05:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3400997 /var/tmp/bdevperf.sock 00:23:34.568 05:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3400997 ']' 00:23:34.568 05:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.568 05:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:34.568 05:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:34.568 05:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
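[editor's note] The bdevperf side gets the same treatment: it is started idle (-z) with the initiator config saved earlier fed in on /dev/fd/63, so the keyring key and the TLS-enabled bdev_nvme_attach_controller are replayed at startup instead of via rpc.py. A sketch, again using process substitution as a stand-in, followed by the verification and test kick-off seen later in the trace:

    bperfcfg=$("$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock save_config)   # captured before the previous bdevperf was killed
    "$SPDK/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
    # once the socket is up, confirm the controller was recreated from the config, then run the workload
    "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests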
00:23:34.568 05:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:34.568 "subsystems": [ 00:23:34.568 { 00:23:34.568 "subsystem": "keyring", 00:23:34.568 "config": [ 00:23:34.568 { 00:23:34.568 "method": "keyring_file_add_key", 00:23:34.568 "params": { 00:23:34.568 "name": "key0", 00:23:34.568 "path": "/tmp/tmp.evnaAYHRVC" 00:23:34.568 } 00:23:34.568 } 00:23:34.568 ] 00:23:34.568 }, 00:23:34.568 { 00:23:34.568 "subsystem": "iobuf", 00:23:34.568 "config": [ 00:23:34.568 { 00:23:34.568 "method": "iobuf_set_options", 00:23:34.568 "params": { 00:23:34.568 "small_pool_count": 8192, 00:23:34.568 "large_pool_count": 1024, 00:23:34.568 "small_bufsize": 8192, 00:23:34.568 "large_bufsize": 135168 00:23:34.568 } 00:23:34.568 } 00:23:34.568 ] 00:23:34.568 }, 00:23:34.568 { 00:23:34.568 "subsystem": "sock", 00:23:34.568 "config": [ 00:23:34.568 { 00:23:34.568 "method": "sock_set_default_impl", 00:23:34.568 "params": { 00:23:34.568 "impl_name": "posix" 00:23:34.568 } 00:23:34.568 }, 00:23:34.568 { 00:23:34.568 "method": "sock_impl_set_options", 00:23:34.568 "params": { 00:23:34.568 "impl_name": "ssl", 00:23:34.568 "recv_buf_size": 4096, 00:23:34.568 "send_buf_size": 4096, 00:23:34.568 "enable_recv_pipe": true, 00:23:34.568 "enable_quickack": false, 00:23:34.568 "enable_placement_id": 0, 00:23:34.568 "enable_zerocopy_send_server": true, 00:23:34.568 "enable_zerocopy_send_client": false, 00:23:34.568 "zerocopy_threshold": 0, 00:23:34.568 "tls_version": 0, 00:23:34.568 "enable_ktls": false 00:23:34.568 } 00:23:34.568 }, 00:23:34.568 { 00:23:34.568 "method": "sock_impl_set_options", 00:23:34.568 "params": { 00:23:34.568 "impl_name": "posix", 00:23:34.568 "recv_buf_size": 2097152, 00:23:34.568 "send_buf_size": 2097152, 00:23:34.568 "enable_recv_pipe": true, 00:23:34.568 "enable_quickack": false, 00:23:34.568 "enable_placement_id": 0, 00:23:34.568 "enable_zerocopy_send_server": true, 00:23:34.568 "enable_zerocopy_send_client": false, 00:23:34.568 "zerocopy_threshold": 0, 00:23:34.568 "tls_version": 0, 00:23:34.568 "enable_ktls": false 00:23:34.568 } 00:23:34.568 } 00:23:34.568 ] 00:23:34.568 }, 00:23:34.568 { 00:23:34.568 "subsystem": "vmd", 00:23:34.568 "config": [] 00:23:34.568 }, 00:23:34.568 { 00:23:34.568 "subsystem": "accel", 00:23:34.568 "config": [ 00:23:34.568 { 00:23:34.568 "method": "accel_set_options", 00:23:34.568 "params": { 00:23:34.568 "small_cache_size": 128, 00:23:34.568 "large_cache_size": 16, 00:23:34.568 "task_count": 2048, 00:23:34.568 "sequence_count": 2048, 00:23:34.568 "buf_count": 2048 00:23:34.568 } 00:23:34.568 } 00:23:34.568 ] 00:23:34.568 }, 00:23:34.568 { 00:23:34.568 "subsystem": "bdev", 00:23:34.568 "config": [ 00:23:34.568 { 00:23:34.568 "method": "bdev_set_options", 00:23:34.568 "params": { 00:23:34.568 "bdev_io_pool_size": 65535, 00:23:34.568 "bdev_io_cache_size": 256, 00:23:34.568 "bdev_auto_examine": true, 00:23:34.568 "iobuf_small_cache_size": 128, 00:23:34.568 "iobuf_large_cache_size": 16 00:23:34.568 } 00:23:34.568 }, 00:23:34.568 { 00:23:34.568 "method": "bdev_raid_set_options", 00:23:34.568 "params": { 00:23:34.568 "process_window_size_kb": 1024, 00:23:34.568 "process_max_bandwidth_mb_sec": 0 00:23:34.568 } 00:23:34.568 }, 00:23:34.568 { 00:23:34.568 "method": "bdev_iscsi_set_options", 00:23:34.568 "params": { 00:23:34.568 "timeout_sec": 30 00:23:34.568 } 00:23:34.568 }, 00:23:34.568 { 00:23:34.568 "method": "bdev_nvme_set_options", 00:23:34.568 "params": { 00:23:34.568 "action_on_timeout": "none", 00:23:34.568 "timeout_us": 0, 
00:23:34.568 "timeout_admin_us": 0, 00:23:34.568 "keep_alive_timeout_ms": 10000, 00:23:34.568 "arbitration_burst": 0, 00:23:34.568 "low_priority_weight": 0, 00:23:34.568 "medium_priority_weight": 0, 00:23:34.568 "high_priority_weight": 0, 00:23:34.568 "nvme_adminq_poll_period_us": 10000, 00:23:34.568 "nvme_ioq_poll_period_us": 0, 00:23:34.568 "io_queue_requests": 512, 00:23:34.568 "delay_cmd_submit": true, 00:23:34.568 "transport_retry_count": 4, 00:23:34.568 "bdev_retry_count": 3, 00:23:34.568 "transport_ack_timeout": 0, 00:23:34.568 "ctrlr_loss_timeout_sec": 0, 00:23:34.568 "reconnect_delay_sec": 0, 00:23:34.568 "fast_io_fail_timeout_sec": 0, 00:23:34.568 "disable_auto_failback": false, 00:23:34.568 "generate_uuids": false, 00:23:34.568 "transport_tos": 0, 00:23:34.568 "nvme_error_stat": false, 00:23:34.568 "rdma_srq_size": 0, 00:23:34.568 "io_path_stat": false, 00:23:34.568 "allow_accel_sequence": false, 00:23:34.568 "rdma_max_cq_size": 0, 00:23:34.568 "rdma_cm_event_timeout_ms": 0, 00:23:34.568 "dhchap_digests": [ 00:23:34.568 "sha256", 00:23:34.568 "sha384", 00:23:34.568 "sha512" 00:23:34.568 ], 00:23:34.568 "dhchap_dhgroups": [ 00:23:34.568 "null", 00:23:34.568 "ffdhe2048", 00:23:34.568 "ffdhe3072", 00:23:34.568 "ffdhe4096", 00:23:34.568 "ffdhe6144", 00:23:34.568 "ffdhe8192" 00:23:34.569 ] 00:23:34.569 } 00:23:34.569 }, 00:23:34.569 { 00:23:34.569 "method": "bdev_nvme_attach_controller", 00:23:34.569 "params": { 00:23:34.569 "name": "nvme0", 00:23:34.569 "trtype": "TCP", 00:23:34.569 "adrfam": "IPv4", 00:23:34.569 "traddr": "10.0.0.2", 00:23:34.569 "trsvcid": "4420", 00:23:34.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.569 "prchk_reftag": false, 00:23:34.569 "prchk_guard": false, 00:23:34.569 "ctrlr_loss_timeout_sec": 0, 00:23:34.569 "reconnect_delay_sec": 0, 00:23:34.569 "fast_io_fail_timeout_sec": 0, 00:23:34.569 "psk": "key0", 00:23:34.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:34.569 "hdgst": false, 00:23:34.569 "ddgst": false 00:23:34.569 } 00:23:34.569 }, 00:23:34.569 { 00:23:34.569 "method": "bdev_nvme_set_hotplug", 00:23:34.569 "params": { 00:23:34.569 "period_us": 100000, 00:23:34.569 "enable": false 00:23:34.569 } 00:23:34.569 }, 00:23:34.569 { 00:23:34.569 "method": "bdev_enable_histogram", 00:23:34.569 "params": { 00:23:34.569 "name": "nvme0n1", 00:23:34.569 "enable": true 00:23:34.569 } 00:23:34.569 }, 00:23:34.569 { 00:23:34.569 "method": "bdev_wait_for_examine" 00:23:34.569 } 00:23:34.569 ] 00:23:34.569 }, 00:23:34.569 { 00:23:34.569 "subsystem": "nbd", 00:23:34.569 "config": [] 00:23:34.569 } 00:23:34.569 ] 00:23:34.569 }' 00:23:34.569 05:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:34.569 05:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.569 [2024-12-16 05:52:08.384046] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:34.569 [2024-12-16 05:52:08.384094] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400997 ] 00:23:34.826 [2024-12-16 05:52:08.439469] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.826 [2024-12-16 05:52:08.477868] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.827 [2024-12-16 05:52:08.624327] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:35.392 05:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:35.392 05:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:35.392 05:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:35.392 05:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:35.650 05:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.650 05:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:35.650 Running I/O for 1 seconds... 00:23:37.024 5475.00 IOPS, 21.39 MiB/s 00:23:37.024 Latency(us) 00:23:37.024 [2024-12-16T04:52:10.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.024 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:37.024 Verification LBA range: start 0x0 length 0x2000 00:23:37.024 nvme0n1 : 1.02 5502.17 21.49 0.00 0.00 23090.24 4743.56 22469.49 00:23:37.024 [2024-12-16T04:52:10.880Z] =================================================================================================================== 00:23:37.024 [2024-12-16T04:52:10.880Z] Total : 5502.17 21.49 0.00 0.00 23090.24 4743.56 22469.49 00:23:37.024 { 00:23:37.024 "results": [ 00:23:37.024 { 00:23:37.024 "job": "nvme0n1", 00:23:37.024 "core_mask": "0x2", 00:23:37.024 "workload": "verify", 00:23:37.024 "status": "finished", 00:23:37.024 "verify_range": { 00:23:37.024 "start": 0, 00:23:37.024 "length": 8192 00:23:37.024 }, 00:23:37.024 "queue_depth": 128, 00:23:37.024 "io_size": 4096, 00:23:37.024 "runtime": 1.018508, 00:23:37.024 "iops": 5502.165913277068, 00:23:37.024 "mibps": 21.492835598738548, 00:23:37.024 "io_failed": 0, 00:23:37.024 "io_timeout": 0, 00:23:37.024 "avg_latency_us": 23090.238335882532, 00:23:37.024 "min_latency_us": 4743.558095238095, 00:23:37.024 "max_latency_us": 22469.485714285714 00:23:37.024 } 00:23:37.024 ], 00:23:37.024 "core_count": 1 00:23:37.024 } 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id 
= --pid ']' 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:37.024 nvmf_trace.0 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3400997 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3400997 ']' 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3400997 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3400997 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3400997' 00:23:37.024 killing process with pid 3400997 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3400997 00:23:37.024 Received shutdown signal, test time was about 1.000000 seconds 00:23:37.024 00:23:37.024 Latency(us) 00:23:37.024 [2024-12-16T04:52:10.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.024 [2024-12-16T04:52:10.880Z] =================================================================================================================== 00:23:37.024 [2024-12-16T04:52:10.880Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3400997 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:37.024 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:37.024 rmmod nvme_tcp 00:23:37.024 rmmod nvme_fabrics 00:23:37.024 rmmod nvme_keyring 00:23:37.283 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:37.283 05:52:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:37.283 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:37.283 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 3400772 ']' 00:23:37.283 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 3400772 00:23:37.283 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3400772 ']' 00:23:37.283 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3400772 00:23:37.283 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:37.283 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:37.283 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3400772 00:23:37.283 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:37.283 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:37.283 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3400772' 00:23:37.283 killing process with pid 3400772 00:23:37.283 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3400772 00:23:37.283 05:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3400772 00:23:37.283 05:52:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:37.283 05:52:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:37.283 05:52:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:37.283 05:52:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:37.283 05:52:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:23:37.283 05:52:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:37.283 05:52:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:23:37.542 05:52:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:37.542 05:52:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:37.542 05:52:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.542 05:52:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:37.542 05:52:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.446 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:39.446 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.uejUo5GShp /tmp/tmp.nM9TdJ9sVJ /tmp/tmp.evnaAYHRVC 00:23:39.446 00:23:39.446 real 1m18.301s 00:23:39.446 user 1m59.647s 00:23:39.446 sys 0m30.120s 00:23:39.446 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:39.446 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.447 ************************************ 00:23:39.447 END TEST nvmf_tls 
00:23:39.447 ************************************ 00:23:39.447 05:52:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:39.447 05:52:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:39.447 05:52:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:39.447 05:52:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:39.447 ************************************ 00:23:39.447 START TEST nvmf_fips 00:23:39.447 ************************************ 00:23:39.447 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:39.706 * Looking for test storage... 00:23:39.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:39.706 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:39.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.706 --rc genhtml_branch_coverage=1 00:23:39.706 --rc genhtml_function_coverage=1 00:23:39.706 --rc genhtml_legend=1 00:23:39.706 --rc geninfo_all_blocks=1 00:23:39.706 --rc geninfo_unexecuted_blocks=1 00:23:39.706 00:23:39.707 ' 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:39.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.707 --rc genhtml_branch_coverage=1 00:23:39.707 --rc genhtml_function_coverage=1 00:23:39.707 --rc genhtml_legend=1 00:23:39.707 --rc geninfo_all_blocks=1 00:23:39.707 --rc geninfo_unexecuted_blocks=1 00:23:39.707 00:23:39.707 ' 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:39.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.707 --rc genhtml_branch_coverage=1 00:23:39.707 --rc genhtml_function_coverage=1 00:23:39.707 --rc genhtml_legend=1 00:23:39.707 --rc geninfo_all_blocks=1 00:23:39.707 --rc geninfo_unexecuted_blocks=1 00:23:39.707 00:23:39.707 ' 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:39.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.707 --rc genhtml_branch_coverage=1 00:23:39.707 --rc genhtml_function_coverage=1 00:23:39.707 --rc genhtml_legend=1 00:23:39.707 --rc geninfo_all_blocks=1 00:23:39.707 --rc geninfo_unexecuted_blocks=1 00:23:39.707 00:23:39.707 ' 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:39.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:39.707 05:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:39.707 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:39.708 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:39.708 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:39.708 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:39.708 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:39.708 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:39.708 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:39.708 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:39.708 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:39.708 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:39.708 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:39.708 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:39.708 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:39.708 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:39.708 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:39.708 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:39.708 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:39.708 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:39.708 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:23:39.967 Error setting digest 00:23:39.967 40024AC5F07F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:39.967 40024AC5F07F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:39.967 
05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:39.967 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:39.968 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:39.968 05:52:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.240 05:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:45.240 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:45.240 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:45.241 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.241 05:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:45.241 Found net devices under 0000:af:00.0: cvl_0_0 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:45.241 Found net devices under 0000:af:00.1: cvl_0_1 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # is_hw=yes 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.241 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:45.500 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:45.500 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:45.500 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:45.500 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:45.500 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:45.500 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:45.500 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:45.500 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:45.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:45.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:23:45.500 00:23:45.500 --- 10.0.0.2 ping statistics --- 00:23:45.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.500 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:23:45.500 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:45.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:45.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:23:45.759 00:23:45.759 --- 10.0.0.1 ping statistics --- 00:23:45.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.759 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # return 0 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=3404943 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 3404943 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3404943 ']' 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:45.759 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:45.759 [2024-12-16 05:52:19.478828] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:45.759 [2024-12-16 05:52:19.478888] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.759 [2024-12-16 05:52:19.536718] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.759 [2024-12-16 05:52:19.574037] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:45.759 [2024-12-16 05:52:19.574077] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:45.759 [2024-12-16 05:52:19.574086] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:45.759 [2024-12-16 05:52:19.574093] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:45.759 [2024-12-16 05:52:19.574098] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:45.759 [2024-12-16 05:52:19.574138] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.017 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:46.017 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:46.018 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:46.018 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:46.018 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:46.018 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.018 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:46.018 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:46.018 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:46.018 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.dsq 00:23:46.018 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:46.018 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.dsq 00:23:46.018 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.dsq 00:23:46.018 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.dsq 00:23:46.018 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:46.276 [2024-12-16 05:52:19.875257] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.277 [2024-12-16 05:52:19.891257] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:46.277 [2024-12-16 05:52:19.891473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.277 malloc0 00:23:46.277 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:46.277 05:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3404983 00:23:46.277 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:46.277 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3404983 /var/tmp/bdevperf.sock 00:23:46.277 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3404983 ']' 00:23:46.277 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:46.277 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:46.277 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:46.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:46.277 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:46.277 05:52:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:46.277 [2024-12-16 05:52:20.026303] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:46.277 [2024-12-16 05:52:20.026357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3404983 ] 00:23:46.277 [2024-12-16 05:52:20.080996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.277 [2024-12-16 05:52:20.120905] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.534 05:52:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:46.534 05:52:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:46.534 05:52:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.dsq 00:23:46.792 05:52:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:46.792 [2024-12-16 05:52:20.571144] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:46.792 TLSTESTn1 00:23:47.050 05:52:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:47.050 Running I/O for 10 seconds... 
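Condensed, the initiator-side TLS setup that the fips.sh trace above performs against the bdevperf RPC socket comes down to three steps: write the PSK interchange key to a 0600-mode temp file, register it in the keyring, and attach the NVMe/TCP controller by key name (the target side was configured with the same key earlier by setup_nvmf_tgt_conf). A sketch of that sequence, reusing the key value and paths from this run but treating them as per-run placeholders:

    # Sketch of the RPC sequence visible in the trace; key value and paths are taken from this run.
    KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    echo -n "$KEY" > /tmp/spdk-psk.dsq && chmod 0600 /tmp/spdk-psk.dsq
    # Register the key with the running bdevperf instance, then attach over TLS.
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.dsq
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # Kick off the verify workload; bdevperf was started with -z, so it waits for this call.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests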
00:23:48.922 5247.00 IOPS, 20.50 MiB/s [2024-12-16T04:52:24.154Z] 5356.50 IOPS, 20.92 MiB/s [2024-12-16T04:52:25.089Z] 5414.67 IOPS, 21.15 MiB/s [2024-12-16T04:52:26.025Z] 5477.75 IOPS, 21.40 MiB/s [2024-12-16T04:52:26.961Z] 5392.80 IOPS, 21.07 MiB/s [2024-12-16T04:52:27.896Z] 5216.67 IOPS, 20.38 MiB/s [2024-12-16T04:52:28.830Z] 5062.86 IOPS, 19.78 MiB/s [2024-12-16T04:52:29.766Z] 4952.75 IOPS, 19.35 MiB/s [2024-12-16T04:52:31.143Z] 4888.89 IOPS, 19.10 MiB/s [2024-12-16T04:52:31.143Z] 4837.20 IOPS, 18.90 MiB/s 00:23:57.287 Latency(us) 00:23:57.287 [2024-12-16T04:52:31.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.287 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:57.287 Verification LBA range: start 0x0 length 0x2000 00:23:57.287 TLSTESTn1 : 10.02 4841.04 18.91 0.00 0.00 26401.30 5367.71 49432.87 00:23:57.287 [2024-12-16T04:52:31.143Z] =================================================================================================================== 00:23:57.287 [2024-12-16T04:52:31.143Z] Total : 4841.04 18.91 0.00 0.00 26401.30 5367.71 49432.87 00:23:57.287 { 00:23:57.287 "results": [ 00:23:57.287 { 00:23:57.287 "job": "TLSTESTn1", 00:23:57.287 "core_mask": "0x4", 00:23:57.287 "workload": "verify", 00:23:57.287 "status": "finished", 00:23:57.287 "verify_range": { 00:23:57.287 "start": 0, 00:23:57.287 "length": 8192 00:23:57.287 }, 00:23:57.287 "queue_depth": 128, 00:23:57.287 "io_size": 4096, 00:23:57.287 "runtime": 10.01788, 00:23:57.287 "iops": 4841.04421294725, 00:23:57.287 "mibps": 18.910328956825197, 00:23:57.287 "io_failed": 0, 00:23:57.287 "io_timeout": 0, 00:23:57.287 "avg_latency_us": 26401.295974282162, 00:23:57.287 "min_latency_us": 5367.710476190477, 00:23:57.287 "max_latency_us": 49432.868571428575 00:23:57.287 } 00:23:57.287 ], 00:23:57.287 "core_count": 1 00:23:57.287 } 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:57.287 nvmf_trace.0 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3404983 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3404983 ']' 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@954 -- # kill -0 3404983 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3404983 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3404983' 00:23:57.287 killing process with pid 3404983 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3404983 00:23:57.287 Received shutdown signal, test time was about 10.000000 seconds 00:23:57.287 00:23:57.287 Latency(us) 00:23:57.287 [2024-12-16T04:52:31.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.287 [2024-12-16T04:52:31.143Z] =================================================================================================================== 00:23:57.287 [2024-12-16T04:52:31.143Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:57.287 05:52:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3404983 00:23:57.287 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:57.287 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:57.287 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:23:57.287 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:57.287 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:23:57.287 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:57.287 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:57.287 rmmod nvme_tcp 00:23:57.287 rmmod nvme_fabrics 00:23:57.546 rmmod nvme_keyring 00:23:57.546 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:57.546 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:23:57.546 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:23:57.546 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 3404943 ']' 00:23:57.546 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 3404943 00:23:57.546 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3404943 ']' 00:23:57.546 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3404943 00:23:57.546 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:57.546 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:57.546 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3404943 00:23:57.546 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:57.546 05:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:57.546 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3404943' 00:23:57.546 killing process with pid 3404943 00:23:57.546 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3404943 00:23:57.546 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3404943 00:23:57.805 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:57.805 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:57.805 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:57.805 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:23:57.805 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:57.805 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:23:57.805 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:23:57.805 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:57.805 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:57.805 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.805 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:57.805 05:52:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.710 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:59.710 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.dsq 00:23:59.710 00:23:59.710 real 0m20.227s 00:23:59.710 user 0m20.638s 00:23:59.710 sys 0m9.829s 00:23:59.710 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:59.710 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:59.710 ************************************ 00:23:59.710 END TEST nvmf_fips 00:23:59.710 ************************************ 00:23:59.710 05:52:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:59.710 05:52:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:59.710 05:52:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:59.710 05:52:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:59.969 ************************************ 00:23:59.969 START TEST nvmf_control_msg_list 00:23:59.969 ************************************ 00:23:59.969 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:59.969 * Looking for test storage... 
00:23:59.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:59.969 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:59.969 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:23:59.969 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:59.969 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:59.969 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:59.969 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:59.969 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:59.969 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:23:59.969 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:23:59.969 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:23:59.969 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:23:59.969 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:23:59.969 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:23:59.969 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:23:59.969 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:59.969 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:23:59.969 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:23:59.969 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:59.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.970 --rc genhtml_branch_coverage=1 00:23:59.970 --rc genhtml_function_coverage=1 00:23:59.970 --rc genhtml_legend=1 00:23:59.970 --rc geninfo_all_blocks=1 00:23:59.970 --rc geninfo_unexecuted_blocks=1 00:23:59.970 00:23:59.970 ' 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:59.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.970 --rc genhtml_branch_coverage=1 00:23:59.970 --rc genhtml_function_coverage=1 00:23:59.970 --rc genhtml_legend=1 00:23:59.970 --rc geninfo_all_blocks=1 00:23:59.970 --rc geninfo_unexecuted_blocks=1 00:23:59.970 00:23:59.970 ' 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:59.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.970 --rc genhtml_branch_coverage=1 00:23:59.970 --rc genhtml_function_coverage=1 00:23:59.970 --rc genhtml_legend=1 00:23:59.970 --rc geninfo_all_blocks=1 00:23:59.970 --rc geninfo_unexecuted_blocks=1 00:23:59.970 00:23:59.970 ' 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:59.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.970 --rc genhtml_branch_coverage=1 00:23:59.970 --rc genhtml_function_coverage=1 00:23:59.970 --rc genhtml_legend=1 00:23:59.970 --rc geninfo_all_blocks=1 00:23:59.970 --rc geninfo_unexecuted_blocks=1 00:23:59.970 00:23:59.970 ' 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:59.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:23:59.970 05:52:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:05.242 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:05.242 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:05.242 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:05.242 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:05.242 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:05.242 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:05.242 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:05.242 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:05.242 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:05.242 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:05.243 05:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:05.243 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:05.243 05:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:05.243 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:05.243 Found net devices under 0000:af:00.0: cvl_0_0 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:05.243 Found net devices under 
0000:af:00.1: cvl_0_1 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # is_hw=yes 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:05.243 05:52:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:05.503 05:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:05.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:24:05.503 00:24:05.503 --- 10.0.0.2 ping statistics --- 00:24:05.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.503 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:05.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:05.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:24:05.503 00:24:05.503 --- 10.0.0.1 ping statistics --- 00:24:05.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.503 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # return 0 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=3410222 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 3410222 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 3410222 ']' 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:05.503 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:05.762 [2024-12-16 05:52:39.379243] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:05.762 [2024-12-16 05:52:39.379297] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.762 [2024-12-16 05:52:39.437202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.762 [2024-12-16 05:52:39.476398] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.762 [2024-12-16 05:52:39.476436] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.762 [2024-12-16 05:52:39.476444] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.762 [2024-12-16 05:52:39.476450] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.762 [2024-12-16 05:52:39.476455] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:05.762 [2024-12-16 05:52:39.476490] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.762 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:05.762 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:24:05.762 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:05.762 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:05.762 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:05.762 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.762 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:05.762 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:05.763 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:05.763 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.763 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:05.763 [2024-12-16 05:52:39.606061] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.763 05:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.763 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:05.763 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.763 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:06.022 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.022 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:06.022 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.022 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:06.022 Malloc0 00:24:06.022 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.022 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:06.022 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.022 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:06.022 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.022 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:06.022 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.022 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:06.022 [2024-12-16 05:52:39.670986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.022 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.022 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3410263 00:24:06.022 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:06.022 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3410265 00:24:06.022 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:06.022 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3410267 00:24:06.022 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3410263 00:24:06.022 05:52:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:06.022 [2024-12-16 05:52:39.745648] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:06.022 [2024-12-16 05:52:39.745842] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:06.022 [2024-12-16 05:52:39.746017] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:07.396 Initializing NVMe Controllers 00:24:07.396 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:07.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:07.396 Initialization complete. Launching workers. 00:24:07.396 ======================================================== 00:24:07.396 Latency(us) 00:24:07.396 Device Information : IOPS MiB/s Average min max 00:24:07.396 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 86.00 0.34 12060.02 169.08 41033.70 00:24:07.396 ======================================================== 00:24:07.396 Total : 86.00 0.34 12060.02 169.08 41033.70 00:24:07.396 00:24:07.396 Initializing NVMe Controllers 00:24:07.396 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:07.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:07.396 Initialization complete. Launching workers. 00:24:07.396 ======================================================== 00:24:07.396 Latency(us) 00:24:07.396 Device Information : IOPS MiB/s Average min max 00:24:07.396 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40887.57 40460.86 41029.39 00:24:07.396 ======================================================== 00:24:07.396 Total : 25.00 0.10 40887.57 40460.86 41029.39 00:24:07.396 00:24:07.396 Initializing NVMe Controllers 00:24:07.396 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:07.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:07.396 Initialization complete. Launching workers. 
00:24:07.396 ======================================================== 00:24:07.396 Latency(us) 00:24:07.396 Device Information : IOPS MiB/s Average min max 00:24:07.396 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40954.00 40214.12 41892.09 00:24:07.396 ======================================================== 00:24:07.396 Total : 25.00 0.10 40954.00 40214.12 41892.09 00:24:07.396 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3410265 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3410267 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:07.396 rmmod nvme_tcp 00:24:07.396 rmmod nvme_fabrics 00:24:07.396 rmmod nvme_keyring 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 3410222 ']' 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 3410222 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 3410222 ']' 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 3410222 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3410222 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3410222' 00:24:07.396 killing process with pid 3410222 00:24:07.396 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 3410222 00:24:07.396 05:52:41 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 3410222 00:24:07.656 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:07.656 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:07.656 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:07.656 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:07.656 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:24:07.656 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:07.656 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:24:07.656 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:07.656 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:07.656 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.656 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:07.656 05:52:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.560 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:09.560 00:24:09.560 real 0m9.800s 00:24:09.560 user 0m6.892s 00:24:09.560 sys 0m4.916s 00:24:09.560 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:09.560 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:09.560 ************************************ 00:24:09.560 END TEST nvmf_control_msg_list 00:24:09.560 ************************************ 00:24:09.560 05:52:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:09.560 05:52:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:09.560 05:52:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:09.560 05:52:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:09.819 ************************************ 00:24:09.819 START TEST nvmf_wait_for_buf 00:24:09.819 ************************************ 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:09.819 * Looking for test storage... 
00:24:09.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:09.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.819 --rc genhtml_branch_coverage=1 00:24:09.819 --rc genhtml_function_coverage=1 00:24:09.819 --rc genhtml_legend=1 00:24:09.819 --rc geninfo_all_blocks=1 00:24:09.819 --rc geninfo_unexecuted_blocks=1 00:24:09.819 00:24:09.819 ' 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:09.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.819 --rc genhtml_branch_coverage=1 00:24:09.819 --rc genhtml_function_coverage=1 00:24:09.819 --rc genhtml_legend=1 00:24:09.819 --rc geninfo_all_blocks=1 00:24:09.819 --rc geninfo_unexecuted_blocks=1 00:24:09.819 00:24:09.819 ' 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:09.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.819 --rc genhtml_branch_coverage=1 00:24:09.819 --rc genhtml_function_coverage=1 00:24:09.819 --rc genhtml_legend=1 00:24:09.819 --rc geninfo_all_blocks=1 00:24:09.819 --rc geninfo_unexecuted_blocks=1 00:24:09.819 00:24:09.819 ' 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:09.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.819 --rc genhtml_branch_coverage=1 00:24:09.819 --rc genhtml_function_coverage=1 00:24:09.819 --rc genhtml_legend=1 00:24:09.819 --rc geninfo_all_blocks=1 00:24:09.819 --rc geninfo_unexecuted_blocks=1 00:24:09.819 00:24:09.819 ' 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:09.819 05:52:43 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.819 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:09.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # 
'[' -z tcp ']' 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:09.820 05:52:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:15.091 
05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:15.091 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:15.091 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:15.091 
05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.091 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:15.092 Found net devices under 0000:af:00.0: cvl_0_0 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:15.092 Found net devices under 0000:af:00.1: cvl_0_1 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # is_hw=yes 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:15.092 05:52:48 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:15.092 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:15.351 05:52:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:15.351 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:15.351 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:15.351 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:15.351 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:15.351 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:15.351 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:15.351 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:15.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:15.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:24:15.610 00:24:15.610 --- 10.0.0.2 ping statistics --- 00:24:15.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.610 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:24:15.610 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:15.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
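The nvmf_tcp_init sequence traced above splits the two ports found under 0000:af:00.0 and 0000:af:00.1 between the host and a private network namespace, so initiator and target can talk over real hardware on a single machine. Collected in one place, and assuming the cvl_0_0/cvl_0_1 device names seen in this job (substitute your own interfaces when reproducing), the plumbing is roughly:

# cvl_0_0 (target side, 10.0.0.2) moves into the namespace; cvl_0_1 (initiator side, 10.0.0.1) stays on the host.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the host side; the comment tag lets the cleanup path find the rule again.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Sanity pings in both directions, matching the ping output captured in the trace.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1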
00:24:15.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:24:15.610 00:24:15.610 --- 10.0.0.1 ping statistics --- 00:24:15.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.610 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:24:15.610 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:15.610 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # return 0 00:24:15.610 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:15.610 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:15.610 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:15.610 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:15.610 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:15.610 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:15.610 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:15.610 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:15.610 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:15.611 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:15.611 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.611 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=3413945 00:24:15.611 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:15.611 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 3413945 00:24:15.611 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 3413945 ']' 00:24:15.611 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.611 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:15.611 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.611 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:15.611 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.611 [2024-12-16 05:52:49.318605] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
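At this point nvmf_tgt is parked at --wait-for-rpc inside the cvl_0_0_ns_spdk namespace, and the remainder of the test is driven over the default RPC socket (/var/tmp/spdk.sock, per the wait message above). The rpc_cmd calls traced below boil down to the sequence sketched here; this uses scripts/rpc.py directly (the traced rpc_cmd helper is a thin wrapper around it) with the same arguments that appear in the trace:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK_DIR/scripts/rpc.py"

# Disable the accel caches and shrink the iobuf small pool before framework init.
$rpc accel_set_options --small-cache-size 0 --large-cache-size 0
$rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192
$rpc framework_start_init

# Back a subsystem with a small malloc bdev and listen on the namespace-side address.
$rpc bdev_malloc_create -b Malloc0 32 512
$rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
$rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Drive reads from the host side, then require that the starved small pool saw
# allocation retries (the trace below reports 1990 of them).
"$SPDK_DIR/build/bin/spdk_nvme_perf" -q 4 -o 131072 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
retries=$($rpc iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
[[ $retries -eq 0 ]] && echo "FAIL: expected small-pool buffer retries" && exit 1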
00:24:15.611 [2024-12-16 05:52:49.318658] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.611 [2024-12-16 05:52:49.379671] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.611 [2024-12-16 05:52:49.420026] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.611 [2024-12-16 05:52:49.420072] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.611 [2024-12-16 05:52:49.420079] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.611 [2024-12-16 05:52:49.420085] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.611 [2024-12-16 05:52:49.420089] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:15.611 [2024-12-16 05:52:49.420129] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.870 05:52:49 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.870 Malloc0 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.870 [2024-12-16 05:52:49.603742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:15.870 [2024-12-16 05:52:49.627928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.870 05:52:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:15.870 [2024-12-16 05:52:49.693926] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:17.369 Initializing NVMe Controllers 00:24:17.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:17.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:17.369 Initialization complete. Launching workers. 00:24:17.369 ======================================================== 00:24:17.369 Latency(us) 00:24:17.369 Device Information : IOPS MiB/s Average min max 00:24:17.369 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 126.00 15.75 32983.27 23941.34 63102.00 00:24:17.369 ======================================================== 00:24:17.369 Total : 126.00 15.75 32983.27 23941.34 63102.00 00:24:17.369 00:24:17.369 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:17.369 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:17.369 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.369 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1990 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1990 -eq 0 ]] 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:17.628 rmmod nvme_tcp 00:24:17.628 rmmod nvme_fabrics 00:24:17.628 rmmod nvme_keyring 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 3413945 ']' 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 3413945 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 3413945 ']' 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 3413945 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@955 -- # uname 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3413945 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3413945' 00:24:17.628 killing process with pid 3413945 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 3413945 00:24:17.628 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 3413945 00:24:17.887 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:17.887 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:17.887 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:17.887 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:17.887 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:24:17.887 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:17.887 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:24:17.887 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:17.887 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:17.887 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.887 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.887 05:52:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.793 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:19.793 00:24:19.793 real 0m10.171s 00:24:19.793 user 0m3.810s 00:24:19.793 sys 0m4.647s 00:24:19.793 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:19.793 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:19.793 ************************************ 00:24:19.793 END TEST nvmf_wait_for_buf 00:24:19.793 ************************************ 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:20.053 05:52:53 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:20.053 ************************************ 00:24:20.053 START TEST nvmf_fuzz 00:24:20.053 ************************************ 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:20.053 * Looking for test storage... 00:24:20.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:20.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.053 --rc genhtml_branch_coverage=1 00:24:20.053 --rc genhtml_function_coverage=1 00:24:20.053 --rc genhtml_legend=1 00:24:20.053 --rc geninfo_all_blocks=1 00:24:20.053 --rc geninfo_unexecuted_blocks=1 00:24:20.053 00:24:20.053 ' 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:20.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.053 --rc genhtml_branch_coverage=1 00:24:20.053 --rc genhtml_function_coverage=1 00:24:20.053 --rc genhtml_legend=1 00:24:20.053 --rc geninfo_all_blocks=1 00:24:20.053 --rc geninfo_unexecuted_blocks=1 00:24:20.053 00:24:20.053 ' 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:20.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.053 --rc genhtml_branch_coverage=1 00:24:20.053 --rc genhtml_function_coverage=1 00:24:20.053 --rc genhtml_legend=1 00:24:20.053 --rc geninfo_all_blocks=1 00:24:20.053 --rc geninfo_unexecuted_blocks=1 00:24:20.053 00:24:20.053 ' 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:20.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.053 --rc genhtml_branch_coverage=1 00:24:20.053 --rc genhtml_function_coverage=1 00:24:20.053 --rc genhtml_legend=1 00:24:20.053 --rc geninfo_all_blocks=1 00:24:20.053 --rc geninfo_unexecuted_blocks=1 00:24:20.053 00:24:20.053 ' 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:20.053 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:20.054 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:20.054 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.054 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.054 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.054 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:20.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:20.054 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:20.054 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:20.054 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:20.054 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:20.054 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:20.054 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:24:20.054 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:20.054 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:20.054 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:20.054 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.054 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.054 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.054 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:20.054 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:20.054 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:20.054 05:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:25.333 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:25.333 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:25.333 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:25.334 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:25.334 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:25.334 
05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:25.334 Found net devices under 0000:af:00.0: cvl_0_0 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:25.334 Found net devices under 0000:af:00.1: cvl_0_1 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # is_hw=yes 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:25.334 05:52:59 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:25.334 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:25.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:25.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:24:25.594 00:24:25.594 --- 10.0.0.2 ping statistics --- 00:24:25.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.594 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:25.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:25.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:24:25.594 00:24:25.594 --- 10.0.0.1 ping statistics --- 00:24:25.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.594 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # return 0 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3417802 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3417802 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 3417802 ']' 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:25.594 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:25.853 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:25.853 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:24:25.853 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:25.853 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.854 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:25.854 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.854 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:25.854 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.854 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:25.854 Malloc0 00:24:25.854 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.854 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:25.854 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.854 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:25.854 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.854 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:25.854 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.854 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:25.854 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.854 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:25.854 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.854 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:25.854 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.854 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:25.854 05:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:57.954 Fuzzing completed. 
Shutting down the fuzz application 00:24:57.954 00:24:57.954 Dumping successful admin opcodes: 00:24:57.954 8, 9, 10, 24, 00:24:57.954 Dumping successful io opcodes: 00:24:57.954 0, 9, 00:24:57.954 NS: 0x200003aeff00 I/O qp, Total commands completed: 1012368, total successful commands: 5930, random_seed: 4174898752 00:24:57.954 NS: 0x200003aeff00 admin qp, Total commands completed: 134349, total successful commands: 1085, random_seed: 3965136128 00:24:57.954 05:53:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:57.954 Fuzzing completed. Shutting down the fuzz application 00:24:57.954 00:24:57.954 Dumping successful admin opcodes: 00:24:57.954 24, 00:24:57.954 Dumping successful io opcodes: 00:24:57.954 00:24:57.954 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1210219242 00:24:57.954 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1210287084 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:57.954 rmmod nvme_tcp 00:24:57.954 rmmod nvme_fabrics 00:24:57.954 rmmod nvme_keyring 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 3417802 ']' 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 3417802 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3417802 ']' 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 3417802 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:24:57.954 05:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3417802 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3417802' 00:24:57.954 killing process with pid 3417802 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 3417802 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 3417802 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:24:57.954 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:57.955 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:24:57.955 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:57.955 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:57.955 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.955 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.955 05:53:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.494 05:53:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:00.494 05:53:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:00.494 00:25:00.494 real 0m40.094s 00:25:00.494 user 0m54.040s 00:25:00.494 sys 0m15.474s 00:25:00.494 05:53:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:00.494 05:53:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:00.494 ************************************ 00:25:00.494 END TEST nvmf_fuzz 00:25:00.494 ************************************ 00:25:00.494 05:53:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:00.494 05:53:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:00.494 05:53:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:00.494 05:53:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:00.494 
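For reference, the fabrics fuzz pass traced above reduces to the short sequence below. This is a sketch, not harness output: rpc_cmd is assumed here to stand in for the autotest wrapper around scripts/rpc.py, and the namespace, address, and relative paths are the ones already configured earlier in this trace.

    rpc_cmd() { ./scripts/rpc.py "$@"; }   # stand-in for the harness helper of the same name
    # Target side: nvmf_tgt pinned to one core inside the test network namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    # (the harness waits for the RPC socket before issuing any rpc_cmd calls)
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options as passed by the test
    rpc_cmd bdev_malloc_create -b Malloc0 64 512           # 64 MiB RAM-backed bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Fuzzer side: a 30 s randomized run with a fixed seed, then a replay of the canned JSON commands
    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" -j ./test/app/fuzz/nvme_fuzz/example.json -a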
************************************ 00:25:00.494 START TEST nvmf_multiconnection 00:25:00.494 ************************************ 00:25:00.494 05:53:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:00.494 * Looking for test storage... 00:25:00.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:00.495 05:53:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:00.495 05:53:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:25:00.495 05:53:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:00.495 05:53:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:00.495 05:53:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:00.495 05:53:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:00.495 05:53:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:00.495 05:53:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:00.495 05:53:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:00.495 05:53:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:00.495 05:53:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:00.495 05:53:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:00.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.495 --rc genhtml_branch_coverage=1 00:25:00.495 --rc genhtml_function_coverage=1 00:25:00.495 --rc genhtml_legend=1 00:25:00.495 --rc geninfo_all_blocks=1 00:25:00.495 --rc geninfo_unexecuted_blocks=1 00:25:00.495 00:25:00.495 ' 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:00.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.495 --rc genhtml_branch_coverage=1 00:25:00.495 --rc genhtml_function_coverage=1 00:25:00.495 --rc genhtml_legend=1 00:25:00.495 --rc geninfo_all_blocks=1 00:25:00.495 --rc geninfo_unexecuted_blocks=1 00:25:00.495 00:25:00.495 ' 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:00.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.495 --rc genhtml_branch_coverage=1 00:25:00.495 --rc genhtml_function_coverage=1 00:25:00.495 --rc genhtml_legend=1 00:25:00.495 --rc geninfo_all_blocks=1 00:25:00.495 --rc geninfo_unexecuted_blocks=1 00:25:00.495 00:25:00.495 ' 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:00.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.495 --rc genhtml_branch_coverage=1 00:25:00.495 --rc genhtml_function_coverage=1 00:25:00.495 --rc genhtml_legend=1 00:25:00.495 --rc geninfo_all_blocks=1 00:25:00.495 --rc geninfo_unexecuted_blocks=1 00:25:00.495 00:25:00.495 ' 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:00.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:00.495 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:00.496 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:00.496 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:00.496 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:00.496 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.496 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:00.496 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:00.496 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:00.496 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.496 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.496 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.496 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:00.496 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:00.496 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:00.496 05:53:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.767 05:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:05.767 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:05.767 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.767 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:05.768 Found net devices under 0000:af:00.0: cvl_0_0 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:05.768 Found net devices under 0000:af:00.1: cvl_0_1 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # is_hw=yes 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set lo up 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:05.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:25:05.768 00:25:05.768 --- 10.0.0.2 ping statistics --- 00:25:05.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.768 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:05.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:25:05.768 00:25:05.768 --- 10.0.0.1 ping statistics --- 00:25:05.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.768 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # return 0 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=3426221 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 3426221 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 3426221 ']' 00:25:05.768 05:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:05.768 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.768 [2024-12-16 05:53:39.596826] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:05.768 [2024-12-16 05:53:39.596875] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.027 [2024-12-16 05:53:39.656485] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:06.027 [2024-12-16 05:53:39.698223] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.027 [2024-12-16 05:53:39.698264] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.027 [2024-12-16 05:53:39.698271] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.027 [2024-12-16 05:53:39.698279] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.027 [2024-12-16 05:53:39.698284] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:06.027 [2024-12-16 05:53:39.698323] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.027 [2024-12-16 05:53:39.698426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:06.027 [2024-12-16 05:53:39.698526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:06.027 [2024-12-16 05:53:39.698527] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.027 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:06.027 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:25:06.027 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:06.027 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:06.027 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.027 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.027 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:06.027 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.027 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.027 [2024-12-16 05:53:39.843999] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.027 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.027 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:06.027 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.027 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:06.027 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.027 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.027 Malloc1 00:25:06.027 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.027 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:06.027 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.027 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
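The eleven Malloc/subsystem/namespace/listener rounds that follow are produced by one loop in multiconnection.sh (seq 1 11, since NVMF_SUBSYS is 11 here); rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py. A condensed sketch of that loop, with rpc.py written out explicitly and the default /var/tmp/spdk.sock socket assumed:

# The TCP transport is created once; -u 8192 sets the in-capsule data size.
rpc.py nvmf_create_transport -t tcp -o -u 8192

# Eleven identical rounds: a 64 MB malloc bdev with 512-byte blocks, a subsystem
# that allows any host (-a) with serial SPDK$i (-s), its namespace, and a TCP
# listener on 10.0.0.2:4420.
for i in $(seq 1 11); do
    rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
    rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done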
00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 [2024-12-16 05:53:39.899240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 Malloc2 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 Malloc3 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 Malloc4 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 Malloc5 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 Malloc6 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.287 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.288 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.288 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:06.288 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.288 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.547 Malloc7 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
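For context on the remainder of this log: once all eleven subsystems exist, the host side attaches each one with nvme connect, waitforserial polls lsblk until a block device carrying the matching SPDK$i serial appears, and the read workload is then driven through SPDK's fio-wrapper (its generated job file and the per-job results appear further down). A hedged sketch of that host-side phase, condensed from the commands shown later in this log; the 15-retry/2-second wait mirrors the log's (( i++ <= 15 )) and sleep 2:

for i in $(seq 1 11); do
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n "nqn.2016-06.io.spdk:cnode$i" \
        --hostnqn="nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562" \
        --hostid="80b56b8f-cbc7-e911-906e-0017a4403562"

    # waitforserial: poll until lsblk reports a namespace whose serial is SPDK$i.
    tries=0
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
        (( tries++ <= 15 )) || exit 1
        sleep 2
    done
done

# 256 KiB sequential reads at queue depth 64 for 10 seconds, one libaio job per
# connected namespace (the wrapper emits the [jobN] sections shown below).
scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10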
00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.547 Malloc8 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.547 Malloc9 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:06.547 05:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.547 Malloc10 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.547 Malloc11 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.547 05:53:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:07.922 05:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:07.922 05:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:07.922 05:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:07.922 05:53:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:07.922 05:53:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:09.824 05:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:09.824 05:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:09.824 05:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:09.824 05:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:09.824 05:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:09.824 05:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:09.824 05:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.824 05:53:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:11.206 05:53:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:11.206 05:53:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:11.206 05:53:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:11.206 05:53:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:11.206 05:53:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:13.108 05:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:13.108 05:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:13.108 05:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:25:13.108 05:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:13.108 05:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:13.108 05:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:13.108 05:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.108 05:53:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:14.486 05:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:14.486 05:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:14.486 05:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:25:14.486 05:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:14.486 05:53:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:16.390 05:53:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:16.391 05:53:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:16.391 05:53:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:25:16.391 05:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:16.391 05:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:16.391 05:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:16.391 05:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.391 05:53:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:17.329 05:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:17.329 05:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:17.329 05:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:17.329 05:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:17.329 05:53:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:19.863 05:53:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:19.863 05:53:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:19.863 05:53:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:25:19.863 05:53:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:19.863 05:53:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:19.863 05:53:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:19.863 05:53:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:19.863 05:53:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:20.801 05:53:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:20.801 05:53:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # 
local i=0 00:25:20.801 05:53:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:20.801 05:53:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:20.801 05:53:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:22.712 05:53:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:22.712 05:53:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:22.712 05:53:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:25:22.712 05:53:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:22.712 05:53:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:22.712 05:53:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:22.712 05:53:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.712 05:53:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:24.090 05:53:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:24.090 05:53:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:24.090 05:53:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:24.090 05:53:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:24.090 05:53:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:26.625 05:53:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:26.625 05:53:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:26.625 05:53:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:25:26.625 05:53:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:26.625 05:53:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:26.625 05:53:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:26.625 05:53:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:26.625 05:53:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:27.562 05:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:27.562 05:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:27.562 05:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:27.562 05:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:27.562 05:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:29.465 05:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:29.465 05:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:29.465 05:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:25:29.465 05:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:29.465 05:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:29.465 05:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:29.465 05:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.465 05:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:30.843 05:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:30.843 05:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:30.844 05:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:30.844 05:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:30.844 05:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:32.745 05:54:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:32.745 05:54:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:32.745 05:54:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:25:33.004 05:54:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:33.004 05:54:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:33.004 05:54:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:33.004 05:54:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.004 05:54:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:34.381 05:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:34.381 05:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:34.381 05:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:34.381 05:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:34.381 05:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:36.285 05:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:36.285 05:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:36.285 05:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:25:36.285 05:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:36.285 05:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:36.285 05:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:36.285 05:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:36.285 05:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:37.663 05:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:37.663 05:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:37.663 05:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:37.663 05:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:37.663 05:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:39.729 05:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:39.729 05:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:39.729 05:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:25:39.729 05:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:39.729 05:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:39.729 05:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:39.729 05:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:39.729 05:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:41.104 05:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:41.104 05:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:41.104 05:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:41.104 05:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:41.104 05:54:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:43.639 05:54:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:43.639 05:54:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:43.639 05:54:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:25:43.639 05:54:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:43.639 05:54:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:43.639 05:54:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:43.639 05:54:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:43.639 [global] 00:25:43.639 thread=1 00:25:43.639 invalidate=1 00:25:43.639 rw=read 00:25:43.639 time_based=1 00:25:43.639 runtime=10 00:25:43.639 ioengine=libaio 00:25:43.639 direct=1 00:25:43.639 bs=262144 00:25:43.639 iodepth=64 00:25:43.639 norandommap=1 00:25:43.639 numjobs=1 00:25:43.639 00:25:43.639 [job0] 00:25:43.639 filename=/dev/nvme0n1 00:25:43.639 [job1] 00:25:43.639 filename=/dev/nvme10n1 00:25:43.639 [job2] 00:25:43.639 filename=/dev/nvme1n1 00:25:43.639 [job3] 00:25:43.639 filename=/dev/nvme2n1 00:25:43.639 [job4] 00:25:43.639 filename=/dev/nvme3n1 00:25:43.639 [job5] 00:25:43.639 filename=/dev/nvme4n1 00:25:43.639 [job6] 00:25:43.639 filename=/dev/nvme5n1 00:25:43.639 [job7] 00:25:43.639 filename=/dev/nvme6n1 00:25:43.639 [job8] 00:25:43.639 filename=/dev/nvme7n1 00:25:43.639 [job9] 00:25:43.639 filename=/dev/nvme8n1 00:25:43.639 [job10] 00:25:43.639 filename=/dev/nvme9n1 00:25:43.639 Could not set queue depth (nvme0n1) 00:25:43.639 Could not set queue depth (nvme10n1) 00:25:43.639 Could not set queue depth (nvme1n1) 00:25:43.639 Could not set queue depth (nvme2n1) 00:25:43.639 Could not set queue depth (nvme3n1) 00:25:43.639 Could not set queue depth (nvme4n1) 00:25:43.639 Could not set queue depth (nvme5n1) 00:25:43.639 Could not set queue depth (nvme6n1) 00:25:43.639 Could not set queue depth (nvme7n1) 00:25:43.639 Could not set queue depth (nvme8n1) 00:25:43.639 Could not set queue depth (nvme9n1) 00:25:43.639 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.639 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.639 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.639 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.639 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.639 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.639 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.639 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.639 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.640 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.640 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:43.640 fio-3.35 00:25:43.640 Starting 11 threads 00:25:55.850 00:25:55.850 job0: (groupid=0, jobs=1): err= 0: pid=3433260: Mon Dec 16 05:54:28 2024 00:25:55.850 read: IOPS=367, BW=91.8MiB/s (96.3MB/s)(926MiB/10088msec) 00:25:55.850 slat (usec): min=11, max=421233, avg=1671.79, stdev=13518.05 00:25:55.850 clat (usec): min=1846, max=1150.2k, avg=172369.90, stdev=202353.68 00:25:55.850 lat (msec): min=2, max=1150, avg=174.04, stdev=204.29 00:25:55.850 clat percentiles (msec): 00:25:55.850 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 15], 20.00th=[ 37], 00:25:55.850 | 30.00th=[ 50], 40.00th=[ 66], 50.00th=[ 80], 60.00th=[ 116], 00:25:55.850 | 70.00th=[ 226], 80.00th=[ 300], 90.00th=[ 405], 95.00th=[ 567], 00:25:55.850 | 99.00th=[ 1003], 99.50th=[ 1036], 99.90th=[ 1133], 99.95th=[ 1150], 00:25:55.850 | 99.99th=[ 1150] 00:25:55.850 bw ( KiB/s): min=10752, max=289792, per=11.92%, avg=93209.60, stdev=80847.97, samples=20 00:25:55.850 iops : min= 42, max= 1132, avg=364.10, stdev=315.81, samples=20 00:25:55.850 lat (msec) : 2=0.03%, 4=1.24%, 10=3.02%, 20=11.28%, 50=15.14% 00:25:55.850 lat (msec) : 100=25.86%, 250=16.19%, 500=20.03%, 750=4.10%, 1000=2.08% 00:25:55.850 lat (msec) : 2000=1.03% 00:25:55.850 cpu : usr=0.10%, sys=1.40%, ctx=1187, majf=0, minf=3722 00:25:55.850 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:25:55.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.850 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:55.850 issued rwts: total=3705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.850 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.850 job1: (groupid=0, jobs=1): err= 0: pid=3433261: Mon Dec 16 05:54:28 2024 00:25:55.850 read: IOPS=300, BW=75.2MiB/s (78.8MB/s)(759MiB/10093msec) 00:25:55.850 slat (usec): min=21, max=674449, avg=2353.27, stdev=20782.83 00:25:55.850 clat (msec): min=2, max=1236, avg=210.20, stdev=214.44 00:25:55.850 lat (msec): min=2, max=1236, avg=212.55, stdev=217.48 00:25:55.850 clat percentiles (msec): 00:25:55.850 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 18], 00:25:55.850 | 30.00th=[ 53], 40.00th=[ 70], 50.00th=[ 107], 60.00th=[ 207], 00:25:55.850 | 70.00th=[ 330], 80.00th=[ 409], 90.00th=[ 506], 95.00th=[ 592], 00:25:55.850 | 99.00th=[ 852], 99.50th=[ 902], 99.90th=[ 911], 99.95th=[ 1234], 
00:25:55.850 | 99.99th=[ 1234] 00:25:55.850 bw ( KiB/s): min=14336, max=364544, per=9.73%, avg=76083.20, stdev=82424.45, samples=20 00:25:55.850 iops : min= 56, max= 1424, avg=297.20, stdev=321.97, samples=20 00:25:55.850 lat (msec) : 4=0.16%, 10=11.30%, 20=9.82%, 50=7.64%, 100=20.10% 00:25:55.850 lat (msec) : 250=13.28%, 500=27.28%, 750=7.35%, 1000=3.00%, 2000=0.07% 00:25:55.850 cpu : usr=0.09%, sys=1.15%, ctx=892, majf=0, minf=4097 00:25:55.850 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:25:55.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.850 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:55.850 issued rwts: total=3035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.850 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.850 job2: (groupid=0, jobs=1): err= 0: pid=3433264: Mon Dec 16 05:54:28 2024 00:25:55.850 read: IOPS=163, BW=40.9MiB/s (42.9MB/s)(413MiB/10080msec) 00:25:55.850 slat (usec): min=16, max=468312, avg=6053.51, stdev=29752.43 00:25:55.850 clat (msec): min=44, max=1251, avg=384.32, stdev=308.20 00:25:55.850 lat (msec): min=44, max=1325, avg=390.37, stdev=312.94 00:25:55.850 clat percentiles (msec): 00:25:55.850 | 1.00th=[ 50], 5.00th=[ 69], 10.00th=[ 80], 20.00th=[ 100], 00:25:55.850 | 30.00th=[ 124], 40.00th=[ 171], 50.00th=[ 279], 60.00th=[ 426], 00:25:55.850 | 70.00th=[ 558], 80.00th=[ 726], 90.00th=[ 860], 95.00th=[ 978], 00:25:55.850 | 99.00th=[ 1083], 99.50th=[ 1116], 99.90th=[ 1250], 99.95th=[ 1250], 00:25:55.850 | 99.99th=[ 1250] 00:25:55.850 bw ( KiB/s): min=14848, max=175104, per=5.20%, avg=40656.00, stdev=43954.02, samples=20 00:25:55.850 iops : min= 58, max= 684, avg=158.80, stdev=171.70, samples=20 00:25:55.850 lat (msec) : 50=1.15%, 100=19.20%, 250=27.44%, 500=18.59%, 750=18.41% 00:25:55.850 lat (msec) : 1000=11.08%, 2000=4.12% 00:25:55.850 cpu : usr=0.03%, sys=0.80%, ctx=218, majf=0, minf=4097 00:25:55.850 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:25:55.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.850 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:55.850 issued rwts: total=1651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.850 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.850 job3: (groupid=0, jobs=1): err= 0: pid=3433265: Mon Dec 16 05:54:28 2024 00:25:55.850 read: IOPS=164, BW=41.1MiB/s (43.1MB/s)(415MiB/10094msec) 00:25:55.850 slat (usec): min=11, max=424640, avg=4075.76, stdev=22991.73 00:25:55.850 clat (msec): min=2, max=1194, avg=384.90, stdev=276.15 00:25:55.850 lat (msec): min=2, max=1194, avg=388.97, stdev=278.80 00:25:55.850 clat percentiles (msec): 00:25:55.850 | 1.00th=[ 4], 5.00th=[ 31], 10.00th=[ 55], 20.00th=[ 105], 00:25:55.850 | 30.00th=[ 138], 40.00th=[ 251], 50.00th=[ 409], 60.00th=[ 477], 00:25:55.850 | 70.00th=[ 550], 80.00th=[ 609], 90.00th=[ 760], 95.00th=[ 869], 00:25:55.850 | 99.00th=[ 1099], 99.50th=[ 1183], 99.90th=[ 1200], 99.95th=[ 1200], 00:25:55.850 | 99.99th=[ 1200] 00:25:55.850 bw ( KiB/s): min=12800, max=114688, per=5.22%, avg=40860.90, stdev=27510.28, samples=20 00:25:55.850 iops : min= 50, max= 448, avg=159.60, stdev=107.47, samples=20 00:25:55.850 lat (msec) : 4=1.08%, 10=1.08%, 20=2.65%, 50=3.74%, 100=10.37% 00:25:55.850 lat (msec) : 250=20.86%, 500=21.34%, 750=28.51%, 1000=7.96%, 2000=2.41% 00:25:55.850 cpu : usr=0.10%, sys=0.66%, ctx=365, majf=0, minf=4097 00:25:55.850 IO depths : 
1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:25:55.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.850 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:55.850 issued rwts: total=1659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.850 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.850 job4: (groupid=0, jobs=1): err= 0: pid=3433267: Mon Dec 16 05:54:28 2024 00:25:55.850 read: IOPS=251, BW=62.9MiB/s (66.0MB/s)(636MiB/10108msec) 00:25:55.850 slat (usec): min=17, max=409721, avg=2863.45, stdev=13560.87 00:25:55.850 clat (usec): min=775, max=1117.4k, avg=251059.89, stdev=180758.68 00:25:55.850 lat (usec): min=804, max=1117.5k, avg=253923.33, stdev=182178.84 00:25:55.850 clat percentiles (msec): 00:25:55.850 | 1.00th=[ 5], 5.00th=[ 18], 10.00th=[ 41], 20.00th=[ 94], 00:25:55.850 | 30.00th=[ 161], 40.00th=[ 220], 50.00th=[ 243], 60.00th=[ 268], 00:25:55.850 | 70.00th=[ 288], 80.00th=[ 326], 90.00th=[ 464], 95.00th=[ 617], 00:25:55.850 | 99.00th=[ 1003], 99.50th=[ 1003], 99.90th=[ 1116], 99.95th=[ 1116], 00:25:55.850 | 99.99th=[ 1116] 00:25:55.851 bw ( KiB/s): min=26112, max=124928, per=8.12%, avg=63513.60, stdev=28447.59, samples=20 00:25:55.851 iops : min= 102, max= 488, avg=248.10, stdev=111.12, samples=20 00:25:55.851 lat (usec) : 1000=0.31% 00:25:55.851 lat (msec) : 2=0.35%, 10=2.16%, 20=4.05%, 50=3.77%, 100=10.61% 00:25:55.851 lat (msec) : 250=31.20%, 500=38.94%, 750=6.09%, 1000=1.53%, 2000=0.98% 00:25:55.851 cpu : usr=0.13%, sys=1.01%, ctx=720, majf=0, minf=4098 00:25:55.851 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:25:55.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:55.851 issued rwts: total=2545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.851 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.851 job5: (groupid=0, jobs=1): err= 0: pid=3433268: Mon Dec 16 05:54:28 2024 00:25:55.851 read: IOPS=188, BW=47.1MiB/s (49.4MB/s)(477MiB/10123msec) 00:25:55.851 slat (usec): min=10, max=356743, avg=3532.90, stdev=20439.82 00:25:55.851 clat (usec): min=1596, max=1222.8k, avg=335838.92, stdev=272157.72 00:25:55.851 lat (usec): min=1641, max=1222.9k, avg=339371.82, stdev=274914.51 00:25:55.851 clat percentiles (msec): 00:25:55.851 | 1.00th=[ 21], 5.00th=[ 35], 10.00th=[ 47], 20.00th=[ 73], 00:25:55.851 | 30.00th=[ 125], 40.00th=[ 176], 50.00th=[ 279], 60.00th=[ 351], 00:25:55.851 | 70.00th=[ 468], 80.00th=[ 567], 90.00th=[ 718], 95.00th=[ 894], 00:25:55.851 | 99.00th=[ 1070], 99.50th=[ 1150], 99.90th=[ 1183], 99.95th=[ 1217], 00:25:55.851 | 99.99th=[ 1217] 00:25:55.851 bw ( KiB/s): min=14848, max=163328, per=6.03%, avg=47180.80, stdev=38647.40, samples=20 00:25:55.851 iops : min= 58, max= 638, avg=184.30, stdev=150.97, samples=20 00:25:55.851 lat (msec) : 2=0.05%, 4=0.16%, 10=0.16%, 20=0.58%, 50=9.91% 00:25:55.851 lat (msec) : 100=13.37%, 250=22.23%, 500=27.58%, 750=18.09%, 1000=5.51% 00:25:55.851 lat (msec) : 2000=2.36% 00:25:55.851 cpu : usr=0.09%, sys=0.77%, ctx=435, majf=0, minf=4097 00:25:55.851 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:25:55.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.851 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:55.851 issued rwts: total=1907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.851 
latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.851 job6: (groupid=0, jobs=1): err= 0: pid=3433269: Mon Dec 16 05:54:28 2024 00:25:55.851 read: IOPS=454, BW=114MiB/s (119MB/s)(1150MiB/10121msec) 00:25:55.851 slat (usec): min=11, max=651477, avg=1312.46, stdev=15561.10 00:25:55.851 clat (usec): min=1073, max=1644.6k, avg=139353.39, stdev=240420.71 00:25:55.851 lat (usec): min=1126, max=1644.6k, avg=140665.85, stdev=241764.46 00:25:55.851 clat percentiles (msec): 00:25:55.851 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 13], 20.00th=[ 23], 00:25:55.851 | 30.00th=[ 33], 40.00th=[ 46], 50.00th=[ 56], 60.00th=[ 64], 00:25:55.851 | 70.00th=[ 87], 80.00th=[ 150], 90.00th=[ 418], 95.00th=[ 667], 00:25:55.851 | 99.00th=[ 1267], 99.50th=[ 1636], 99.90th=[ 1636], 99.95th=[ 1653], 00:25:55.851 | 99.99th=[ 1653] 00:25:55.851 bw ( KiB/s): min=13312, max=288256, per=14.85%, avg=116126.60, stdev=94788.29, samples=20 00:25:55.851 iops : min= 52, max= 1126, avg=453.60, stdev=370.28, samples=20 00:25:55.851 lat (msec) : 2=0.17%, 4=2.39%, 10=6.09%, 20=8.93%, 50=25.24% 00:25:55.851 lat (msec) : 100=29.80%, 250=12.07%, 500=7.48%, 750=3.89%, 1000=2.59% 00:25:55.851 lat (msec) : 2000=1.35% 00:25:55.851 cpu : usr=0.17%, sys=1.71%, ctx=1684, majf=0, minf=4097 00:25:55.851 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:55.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:55.851 issued rwts: total=4600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.851 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.851 job7: (groupid=0, jobs=1): err= 0: pid=3433274: Mon Dec 16 05:54:28 2024 00:25:55.851 read: IOPS=193, BW=48.4MiB/s (50.7MB/s)(488MiB/10083msec) 00:25:55.851 slat (usec): min=11, max=261060, avg=5119.73, stdev=21205.09 00:25:55.851 clat (msec): min=38, max=1012, avg=325.30, stdev=249.25 00:25:55.851 lat (msec): min=44, max=1100, avg=330.42, stdev=252.97 00:25:55.851 clat percentiles (msec): 00:25:55.851 | 1.00th=[ 45], 5.00th=[ 55], 10.00th=[ 62], 20.00th=[ 73], 00:25:55.851 | 30.00th=[ 95], 40.00th=[ 148], 50.00th=[ 268], 60.00th=[ 388], 00:25:55.851 | 70.00th=[ 498], 80.00th=[ 592], 90.00th=[ 659], 95.00th=[ 751], 00:25:55.851 | 99.00th=[ 911], 99.50th=[ 995], 99.90th=[ 1011], 99.95th=[ 1011], 00:25:55.851 | 99.99th=[ 1011] 00:25:55.851 bw ( KiB/s): min=11264, max=218624, per=6.18%, avg=48307.20, stdev=52785.63, samples=20 00:25:55.851 iops : min= 44, max= 854, avg=188.70, stdev=206.19, samples=20 00:25:55.851 lat (msec) : 50=3.23%, 100=28.40%, 250=16.97%, 500=21.58%, 750=25.06% 00:25:55.851 lat (msec) : 1000=4.66%, 2000=0.10% 00:25:55.851 cpu : usr=0.09%, sys=0.85%, ctx=254, majf=0, minf=4097 00:25:55.851 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:25:55.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.851 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:55.851 issued rwts: total=1951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.851 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.851 job8: (groupid=0, jobs=1): err= 0: pid=3433275: Mon Dec 16 05:54:28 2024 00:25:55.851 read: IOPS=424, BW=106MiB/s (111MB/s)(1075MiB/10119msec) 00:25:55.851 slat (usec): min=7, max=567486, avg=1100.24, stdev=14450.05 00:25:55.851 clat (usec): min=609, max=1422.0k, avg=149362.66, stdev=228497.21 00:25:55.851 lat (usec): min=639, max=1422.0k, 
avg=150462.90, stdev=229741.01 00:25:55.851 clat percentiles (msec): 00:25:55.851 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 13], 20.00th=[ 28], 00:25:55.851 | 30.00th=[ 36], 40.00th=[ 44], 50.00th=[ 55], 60.00th=[ 72], 00:25:55.851 | 70.00th=[ 111], 80.00th=[ 247], 90.00th=[ 422], 95.00th=[ 659], 00:25:55.851 | 99.00th=[ 1133], 99.50th=[ 1418], 99.90th=[ 1418], 99.95th=[ 1418], 00:25:55.851 | 99.99th=[ 1418] 00:25:55.851 bw ( KiB/s): min=18432, max=316928, per=13.87%, avg=108471.90, stdev=90611.03, samples=20 00:25:55.851 iops : min= 72, max= 1238, avg=423.70, stdev=353.96, samples=20 00:25:55.851 lat (usec) : 750=0.16%, 1000=0.16% 00:25:55.851 lat (msec) : 2=0.53%, 4=0.63%, 10=5.95%, 20=7.09%, 50=32.30% 00:25:55.851 lat (msec) : 100=21.07%, 250=12.28%, 500=12.21%, 750=4.00%, 1000=2.12% 00:25:55.851 lat (msec) : 2000=1.49% 00:25:55.851 cpu : usr=0.14%, sys=1.40%, ctx=1698, majf=0, minf=4097 00:25:55.851 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:25:55.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:55.851 issued rwts: total=4300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.851 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.851 job9: (groupid=0, jobs=1): err= 0: pid=3433276: Mon Dec 16 05:54:28 2024 00:25:55.851 read: IOPS=182, BW=45.6MiB/s (47.8MB/s)(461MiB/10109msec) 00:25:55.851 slat (usec): min=16, max=305761, avg=3934.13, stdev=20515.49 00:25:55.851 clat (msec): min=31, max=1153, avg=346.83, stdev=232.29 00:25:55.851 lat (msec): min=31, max=1153, avg=350.77, stdev=233.86 00:25:55.851 clat percentiles (msec): 00:25:55.851 | 1.00th=[ 42], 5.00th=[ 61], 10.00th=[ 83], 20.00th=[ 186], 00:25:55.851 | 30.00th=[ 232], 40.00th=[ 257], 50.00th=[ 279], 60.00th=[ 305], 00:25:55.851 | 70.00th=[ 359], 80.00th=[ 567], 90.00th=[ 718], 95.00th=[ 785], 00:25:55.851 | 99.00th=[ 1053], 99.50th=[ 1053], 99.90th=[ 1150], 99.95th=[ 1150], 00:25:55.851 | 99.99th=[ 1150] 00:25:55.851 bw ( KiB/s): min=15360, max=134656, per=5.82%, avg=45516.80, stdev=28468.64, samples=20 00:25:55.851 iops : min= 60, max= 526, avg=177.80, stdev=111.21, samples=20 00:25:55.851 lat (msec) : 50=1.74%, 100=9.55%, 250=26.28%, 500=40.01%, 750=14.33% 00:25:55.851 lat (msec) : 1000=5.97%, 2000=2.12% 00:25:55.851 cpu : usr=0.02%, sys=0.88%, ctx=364, majf=0, minf=4097 00:25:55.851 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:25:55.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.851 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:55.851 issued rwts: total=1842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.851 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.851 job10: (groupid=0, jobs=1): err= 0: pid=3433277: Mon Dec 16 05:54:28 2024 00:25:55.851 read: IOPS=368, BW=92.1MiB/s (96.6MB/s)(932MiB/10114msec) 00:25:55.851 slat (usec): min=7, max=361290, avg=1593.53, stdev=10699.52 00:25:55.851 clat (usec): min=1623, max=1174.9k, avg=171863.45, stdev=187303.79 00:25:55.851 lat (usec): min=1669, max=1174.9k, avg=173456.98, stdev=188949.86 00:25:55.851 clat percentiles (msec): 00:25:55.851 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 12], 20.00th=[ 21], 00:25:55.851 | 30.00th=[ 24], 40.00th=[ 36], 50.00th=[ 77], 60.00th=[ 222], 00:25:55.851 | 70.00th=[ 266], 80.00th=[ 305], 90.00th=[ 368], 95.00th=[ 531], 00:25:55.851 | 99.00th=[ 726], 99.50th=[ 1116], 99.90th=[ 1133], 
99.95th=[ 1183], 00:25:55.851 | 99.99th=[ 1183] 00:25:55.851 bw ( KiB/s): min=12288, max=349696, per=11.99%, avg=93798.40, stdev=103590.17, samples=20 00:25:55.851 iops : min= 48, max= 1366, avg=366.40, stdev=404.65, samples=20 00:25:55.851 lat (msec) : 2=0.13%, 4=0.24%, 10=5.90%, 20=12.50%, 50=24.70% 00:25:55.851 lat (msec) : 100=9.15%, 250=13.79%, 500=27.66%, 750=5.02%, 1000=0.30% 00:25:55.851 lat (msec) : 2000=0.62% 00:25:55.851 cpu : usr=0.09%, sys=1.31%, ctx=968, majf=0, minf=4097 00:25:55.851 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:25:55.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:55.851 issued rwts: total=3728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.851 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.851 00:25:55.851 Run status group 0 (all jobs): 00:25:55.851 READ: bw=764MiB/s (801MB/s), 40.9MiB/s-114MiB/s (42.9MB/s-119MB/s), io=7731MiB (8106MB), run=10080-10123msec 00:25:55.851 00:25:55.851 Disk stats (read/write): 00:25:55.851 nvme0n1: ios=7263/0, merge=0/0, ticks=1233681/0, in_queue=1233681, util=95.12% 00:25:55.851 nvme10n1: ios=5911/0, merge=0/0, ticks=1233328/0, in_queue=1233328, util=95.53% 00:25:55.851 nvme1n1: ios=3165/0, merge=0/0, ticks=1230696/0, in_queue=1230696, util=96.14% 00:25:55.851 nvme2n1: ios=3119/0, merge=0/0, ticks=1234956/0, in_queue=1234956, util=96.48% 00:25:55.851 nvme3n1: ios=4929/0, merge=0/0, ticks=1233772/0, in_queue=1233772, util=96.68% 00:25:55.851 nvme4n1: ios=3692/0, merge=0/0, ticks=1232968/0, in_queue=1232968, util=97.45% 00:25:55.851 nvme5n1: ios=9074/0, merge=0/0, ticks=1227677/0, in_queue=1227677, util=97.81% 00:25:55.851 nvme6n1: ios=3757/0, merge=0/0, ticks=1233074/0, in_queue=1233074, util=98.10% 00:25:55.851 nvme7n1: ios=8466/0, merge=0/0, ticks=1223441/0, in_queue=1223441, util=98.93% 00:25:55.851 nvme8n1: ios=3535/0, merge=0/0, ticks=1234102/0, in_queue=1234102, util=99.12% 00:25:55.852 nvme9n1: ios=7286/0, merge=0/0, ticks=1233203/0, in_queue=1233203, util=99.25% 00:25:55.852 05:54:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:55.852 [global] 00:25:55.852 thread=1 00:25:55.852 invalidate=1 00:25:55.852 rw=randwrite 00:25:55.852 time_based=1 00:25:55.852 runtime=10 00:25:55.852 ioengine=libaio 00:25:55.852 direct=1 00:25:55.852 bs=262144 00:25:55.852 iodepth=64 00:25:55.852 norandommap=1 00:25:55.852 numjobs=1 00:25:55.852 00:25:55.852 [job0] 00:25:55.852 filename=/dev/nvme0n1 00:25:55.852 [job1] 00:25:55.852 filename=/dev/nvme10n1 00:25:55.852 [job2] 00:25:55.852 filename=/dev/nvme1n1 00:25:55.852 [job3] 00:25:55.852 filename=/dev/nvme2n1 00:25:55.852 [job4] 00:25:55.852 filename=/dev/nvme3n1 00:25:55.852 [job5] 00:25:55.852 filename=/dev/nvme4n1 00:25:55.852 [job6] 00:25:55.852 filename=/dev/nvme5n1 00:25:55.852 [job7] 00:25:55.852 filename=/dev/nvme6n1 00:25:55.852 [job8] 00:25:55.852 filename=/dev/nvme7n1 00:25:55.852 [job9] 00:25:55.852 filename=/dev/nvme8n1 00:25:55.852 [job10] 00:25:55.852 filename=/dev/nvme9n1 00:25:55.852 Could not set queue depth (nvme0n1) 00:25:55.852 Could not set queue depth (nvme10n1) 00:25:55.852 Could not set queue depth (nvme1n1) 00:25:55.852 Could not set queue depth (nvme2n1) 00:25:55.852 Could not set queue depth (nvme3n1) 00:25:55.852 Could not set queue 
depth (nvme4n1) 00:25:55.852 Could not set queue depth (nvme5n1) 00:25:55.852 Could not set queue depth (nvme6n1) 00:25:55.852 Could not set queue depth (nvme7n1) 00:25:55.852 Could not set queue depth (nvme8n1) 00:25:55.852 Could not set queue depth (nvme9n1) 00:25:55.852 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.852 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.852 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.852 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.852 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.852 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.852 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.852 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.852 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.852 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.852 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.852 fio-3.35 00:25:55.852 Starting 11 threads 00:26:05.833 00:26:05.833 job0: (groupid=0, jobs=1): err= 0: pid=3434297: Mon Dec 16 05:54:39 2024 00:26:05.833 write: IOPS=547, BW=137MiB/s (144MB/s)(1409MiB/10294msec); 0 zone resets 00:26:05.833 slat (usec): min=26, max=169789, avg=1435.31, stdev=4987.16 00:26:05.833 clat (usec): min=936, max=727069, avg=115367.60, stdev=132218.52 00:26:05.833 lat (usec): min=991, max=727110, avg=116802.90, stdev=133552.64 00:26:05.833 clat percentiles (msec): 00:26:05.833 | 1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 40], 20.00th=[ 43], 00:26:05.833 | 30.00th=[ 45], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 50], 00:26:05.833 | 70.00th=[ 92], 80.00th=[ 207], 90.00th=[ 363], 95.00th=[ 422], 00:26:05.833 | 99.00th=[ 472], 99.50th=[ 542], 99.90th=[ 693], 99.95th=[ 693], 00:26:05.833 | 99.99th=[ 726] 00:26:05.833 bw ( KiB/s): min=34816, max=361472, per=12.77%, avg=142637.45, stdev=122342.86, samples=20 00:26:05.833 iops : min= 136, max= 1412, avg=557.15, stdev=477.86, samples=20 00:26:05.833 lat (usec) : 1000=0.07% 00:26:05.833 lat (msec) : 2=0.41%, 4=0.83%, 10=2.08%, 20=1.99%, 50=55.39% 00:26:05.833 lat (msec) : 100=12.99%, 250=9.39%, 500=16.25%, 750=0.60% 00:26:05.833 cpu : usr=1.37%, sys=1.67%, ctx=1937, majf=0, minf=1 00:26:05.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:05.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.833 issued rwts: total=0,5636,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.833 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.833 job1: (groupid=0, jobs=1): err= 0: pid=3434309: Mon Dec 16 05:54:39 2024 00:26:05.833 write: IOPS=313, BW=78.3MiB/s (82.1MB/s)(807MiB/10304msec); 0 zone resets 00:26:05.833 slat (usec): min=30, 
max=218478, avg=2282.82, stdev=8089.86 00:26:05.833 clat (usec): min=923, max=851912, avg=201965.73, stdev=147281.64 00:26:05.833 lat (usec): min=996, max=851952, avg=204248.55, stdev=148906.53 00:26:05.833 clat percentiles (msec): 00:26:05.833 | 1.00th=[ 5], 5.00th=[ 33], 10.00th=[ 55], 20.00th=[ 93], 00:26:05.833 | 30.00th=[ 106], 40.00th=[ 120], 50.00th=[ 136], 60.00th=[ 192], 00:26:05.833 | 70.00th=[ 266], 80.00th=[ 317], 90.00th=[ 435], 95.00th=[ 472], 00:26:05.833 | 99.00th=[ 609], 99.50th=[ 735], 99.90th=[ 827], 99.95th=[ 852], 00:26:05.833 | 99.99th=[ 852] 00:26:05.833 bw ( KiB/s): min=26624, max=202240, per=7.24%, avg=80933.85, stdev=49326.17, samples=20 00:26:05.833 iops : min= 104, max= 790, avg=316.10, stdev=192.63, samples=20 00:26:05.833 lat (usec) : 1000=0.03% 00:26:05.833 lat (msec) : 2=0.15%, 4=0.65%, 10=0.84%, 20=0.37%, 50=6.88% 00:26:05.833 lat (msec) : 100=17.14%, 250=39.86%, 500=30.16%, 750=3.47%, 1000=0.43% 00:26:05.833 cpu : usr=0.59%, sys=1.20%, ctx=1570, majf=0, minf=1 00:26:05.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:05.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.833 issued rwts: total=0,3226,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.833 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.833 job2: (groupid=0, jobs=1): err= 0: pid=3434314: Mon Dec 16 05:54:39 2024 00:26:05.833 write: IOPS=404, BW=101MiB/s (106MB/s)(1042MiB/10303msec); 0 zone resets 00:26:05.833 slat (usec): min=17, max=149769, avg=2156.28, stdev=6681.12 00:26:05.833 clat (msec): min=13, max=726, avg=155.93, stdev=142.29 00:26:05.833 lat (msec): min=13, max=726, avg=158.09, stdev=144.06 00:26:05.833 clat percentiles (msec): 00:26:05.833 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 46], 00:26:05.833 | 30.00th=[ 48], 40.00th=[ 79], 50.00th=[ 94], 60.00th=[ 129], 00:26:05.833 | 70.00th=[ 178], 80.00th=[ 264], 90.00th=[ 430], 95.00th=[ 489], 00:26:05.833 | 99.00th=[ 558], 99.50th=[ 617], 99.90th=[ 693], 99.95th=[ 726], 00:26:05.833 | 99.99th=[ 726] 00:26:05.833 bw ( KiB/s): min=30720, max=354816, per=9.40%, avg=105043.70, stdev=94103.54, samples=20 00:26:05.833 iops : min= 120, max= 1386, avg=410.30, stdev=367.57, samples=20 00:26:05.833 lat (msec) : 20=0.07%, 50=31.55%, 100=20.30%, 250=25.72%, 500=18.40% 00:26:05.833 lat (msec) : 750=3.96% 00:26:05.833 cpu : usr=0.90%, sys=1.19%, ctx=1265, majf=0, minf=1 00:26:05.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:05.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.833 issued rwts: total=0,4168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.833 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.833 job3: (groupid=0, jobs=1): err= 0: pid=3434317: Mon Dec 16 05:54:39 2024 00:26:05.833 write: IOPS=380, BW=95.1MiB/s (99.7MB/s)(979MiB/10297msec); 0 zone resets 00:26:05.833 slat (usec): min=17, max=202265, avg=1994.01, stdev=6644.70 00:26:05.833 clat (usec): min=1225, max=717219, avg=166226.50, stdev=143482.18 00:26:05.833 lat (usec): min=1880, max=717264, avg=168220.51, stdev=145296.93 00:26:05.833 clat percentiles (msec): 00:26:05.833 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 27], 20.00th=[ 47], 00:26:05.833 | 30.00th=[ 71], 40.00th=[ 90], 50.00th=[ 136], 60.00th=[ 150], 00:26:05.833 | 70.00th=[ 176], 
80.00th=[ 305], 90.00th=[ 414], 95.00th=[ 451], 00:26:05.833 | 99.00th=[ 558], 99.50th=[ 609], 99.90th=[ 684], 99.95th=[ 718], 00:26:05.833 | 99.99th=[ 718] 00:26:05.833 bw ( KiB/s): min=32768, max=365568, per=8.82%, avg=98581.25, stdev=78542.20, samples=20 00:26:05.833 iops : min= 128, max= 1428, avg=385.05, stdev=306.83, samples=20 00:26:05.833 lat (msec) : 2=0.08%, 4=0.56%, 10=2.35%, 20=4.39%, 50=16.60% 00:26:05.833 lat (msec) : 100=18.80%, 250=34.76%, 500=20.49%, 750=1.97% 00:26:05.833 cpu : usr=0.86%, sys=1.14%, ctx=2062, majf=0, minf=1 00:26:05.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:05.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.833 issued rwts: total=0,3915,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.833 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.833 job4: (groupid=0, jobs=1): err= 0: pid=3434318: Mon Dec 16 05:54:39 2024 00:26:05.833 write: IOPS=410, BW=103MiB/s (108MB/s)(1032MiB/10058msec); 0 zone resets 00:26:05.833 slat (usec): min=20, max=103581, avg=1646.28, stdev=5058.60 00:26:05.833 clat (usec): min=860, max=499312, avg=154280.13, stdev=116618.48 00:26:05.833 lat (usec): min=910, max=499363, avg=155926.42, stdev=117653.02 00:26:05.833 clat percentiles (msec): 00:26:05.833 | 1.00th=[ 4], 5.00th=[ 21], 10.00th=[ 36], 20.00th=[ 54], 00:26:05.833 | 30.00th=[ 71], 40.00th=[ 96], 50.00th=[ 140], 60.00th=[ 153], 00:26:05.833 | 70.00th=[ 180], 80.00th=[ 241], 90.00th=[ 355], 95.00th=[ 409], 00:26:05.833 | 99.00th=[ 460], 99.50th=[ 485], 99.90th=[ 498], 99.95th=[ 498], 00:26:05.833 | 99.99th=[ 502] 00:26:05.833 bw ( KiB/s): min=37376, max=270336, per=9.31%, avg=104033.90, stdev=67799.82, samples=20 00:26:05.833 iops : min= 146, max= 1056, avg=406.35, stdev=264.87, samples=20 00:26:05.833 lat (usec) : 1000=0.05% 00:26:05.833 lat (msec) : 2=0.46%, 4=0.68%, 10=1.33%, 20=2.40%, 50=11.95% 00:26:05.833 lat (msec) : 100=23.79%, 250=40.44%, 500=18.90% 00:26:05.833 cpu : usr=0.99%, sys=1.30%, ctx=2189, majf=0, minf=1 00:26:05.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:05.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.833 issued rwts: total=0,4127,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.833 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.833 job5: (groupid=0, jobs=1): err= 0: pid=3434319: Mon Dec 16 05:54:39 2024 00:26:05.833 write: IOPS=261, BW=65.3MiB/s (68.5MB/s)(672MiB/10288msec); 0 zone resets 00:26:05.833 slat (usec): min=21, max=61386, avg=2939.03, stdev=7356.08 00:26:05.833 clat (usec): min=1076, max=694682, avg=241843.30, stdev=142266.74 00:26:05.833 lat (usec): min=1146, max=727273, avg=244782.33, stdev=144000.33 00:26:05.833 clat percentiles (msec): 00:26:05.833 | 1.00th=[ 3], 5.00th=[ 20], 10.00th=[ 55], 20.00th=[ 114], 00:26:05.833 | 30.00th=[ 134], 40.00th=[ 188], 50.00th=[ 245], 60.00th=[ 266], 00:26:05.833 | 70.00th=[ 334], 80.00th=[ 397], 90.00th=[ 435], 95.00th=[ 451], 00:26:05.833 | 99.00th=[ 535], 99.50th=[ 617], 99.90th=[ 693], 99.95th=[ 693], 00:26:05.833 | 99.99th=[ 693] 00:26:05.833 bw ( KiB/s): min=32768, max=167936, per=6.01%, avg=67184.90, stdev=36752.84, samples=20 00:26:05.833 iops : min= 128, max= 656, avg=262.40, stdev=143.47, samples=20 00:26:05.833 lat (msec) : 2=0.60%, 4=1.12%, 
10=2.01%, 20=1.38%, 50=4.58% 00:26:05.834 lat (msec) : 100=5.54%, 250=37.28%, 500=46.24%, 750=1.26% 00:26:05.834 cpu : usr=0.67%, sys=0.97%, ctx=1300, majf=0, minf=1 00:26:05.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:05.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.834 issued rwts: total=0,2688,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.834 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.834 job6: (groupid=0, jobs=1): err= 0: pid=3434320: Mon Dec 16 05:54:39 2024 00:26:05.834 write: IOPS=423, BW=106MiB/s (111MB/s)(1078MiB/10186msec); 0 zone resets 00:26:05.834 slat (usec): min=29, max=55330, avg=1999.28, stdev=4811.00 00:26:05.834 clat (usec): min=1849, max=545181, avg=149128.82, stdev=101344.98 00:26:05.834 lat (usec): min=1908, max=545230, avg=151128.10, stdev=102404.65 00:26:05.834 clat percentiles (msec): 00:26:05.834 | 1.00th=[ 10], 5.00th=[ 33], 10.00th=[ 40], 20.00th=[ 74], 00:26:05.834 | 30.00th=[ 81], 40.00th=[ 115], 50.00th=[ 138], 60.00th=[ 153], 00:26:05.834 | 70.00th=[ 167], 80.00th=[ 197], 90.00th=[ 317], 95.00th=[ 388], 00:26:05.834 | 99.00th=[ 443], 99.50th=[ 451], 99.90th=[ 514], 99.95th=[ 527], 00:26:05.834 | 99.99th=[ 542] 00:26:05.834 bw ( KiB/s): min=36864, max=263168, per=9.73%, avg=108744.50, stdev=62656.09, samples=20 00:26:05.834 iops : min= 144, max= 1028, avg=424.75, stdev=244.79, samples=20 00:26:05.834 lat (msec) : 2=0.02%, 4=0.14%, 10=0.97%, 20=1.35%, 50=12.29% 00:26:05.834 lat (msec) : 100=23.13%, 250=48.50%, 500=13.45%, 750=0.14% 00:26:05.834 cpu : usr=0.90%, sys=1.33%, ctx=1640, majf=0, minf=1 00:26:05.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:05.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.834 issued rwts: total=0,4311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.834 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.834 job7: (groupid=0, jobs=1): err= 0: pid=3434323: Mon Dec 16 05:54:39 2024 00:26:05.834 write: IOPS=265, BW=66.5MiB/s (69.7MB/s)(685MiB/10297msec); 0 zone resets 00:26:05.834 slat (usec): min=24, max=68882, avg=3216.99, stdev=7712.84 00:26:05.834 clat (usec): min=706, max=714821, avg=237303.51, stdev=151993.44 00:26:05.834 lat (usec): min=750, max=714865, avg=240520.50, stdev=154112.37 00:26:05.834 clat percentiles (usec): 00:26:05.834 | 1.00th=[ 1614], 5.00th=[ 5669], 10.00th=[ 23462], 20.00th=[ 96994], 00:26:05.834 | 30.00th=[147850], 40.00th=[154141], 50.00th=[204473], 60.00th=[278922], 00:26:05.834 | 70.00th=[354419], 80.00th=[404751], 90.00th=[438305], 95.00th=[455082], 00:26:05.834 | 99.00th=[526386], 99.50th=[633340], 99.90th=[683672], 99.95th=[717226], 00:26:05.834 | 99.99th=[717226] 00:26:05.834 bw ( KiB/s): min=32768, max=244224, per=6.13%, avg=68459.65, stdev=49835.03, samples=20 00:26:05.834 iops : min= 128, max= 954, avg=267.35, stdev=194.67, samples=20 00:26:05.834 lat (usec) : 750=0.04%, 1000=0.26% 00:26:05.834 lat (msec) : 2=1.31%, 4=1.28%, 10=3.43%, 20=2.92%, 50=3.91% 00:26:05.834 lat (msec) : 100=7.30%, 250=34.44%, 500=43.68%, 750=1.42% 00:26:05.834 cpu : usr=0.69%, sys=0.90%, ctx=1219, majf=0, minf=1 00:26:05.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:05.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:26:05.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.834 issued rwts: total=0,2738,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.834 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.834 job8: (groupid=0, jobs=1): err= 0: pid=3434324: Mon Dec 16 05:54:39 2024 00:26:05.834 write: IOPS=412, BW=103MiB/s (108MB/s)(1063MiB/10297msec); 0 zone resets 00:26:05.834 slat (usec): min=20, max=241606, avg=2287.07, stdev=7811.73 00:26:05.834 clat (usec): min=1102, max=768247, avg=152646.56, stdev=139387.65 00:26:05.834 lat (usec): min=1641, max=768315, avg=154933.62, stdev=141190.00 00:26:05.834 clat percentiles (msec): 00:26:05.834 | 1.00th=[ 12], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 75], 00:26:05.834 | 30.00th=[ 86], 40.00th=[ 95], 50.00th=[ 99], 60.00th=[ 104], 00:26:05.834 | 70.00th=[ 132], 80.00th=[ 176], 90.00th=[ 439], 95.00th=[ 485], 00:26:05.834 | 99.00th=[ 575], 99.50th=[ 625], 99.90th=[ 735], 99.95th=[ 735], 00:26:05.834 | 99.99th=[ 768] 00:26:05.834 bw ( KiB/s): min=30720, max=274395, per=9.59%, avg=107134.15, stdev=68946.67, samples=20 00:26:05.834 iops : min= 120, max= 1071, avg=418.45, stdev=269.21, samples=20 00:26:05.834 lat (msec) : 2=0.05%, 4=0.16%, 10=0.71%, 20=0.99%, 50=11.20% 00:26:05.834 lat (msec) : 100=38.94%, 250=31.88%, 500=12.26%, 750=3.76%, 1000=0.05% 00:26:05.834 cpu : usr=0.99%, sys=1.12%, ctx=1226, majf=0, minf=1 00:26:05.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:05.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.834 issued rwts: total=0,4250,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.834 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.834 job9: (groupid=0, jobs=1): err= 0: pid=3434325: Mon Dec 16 05:54:39 2024 00:26:05.834 write: IOPS=590, BW=148MiB/s (155MB/s)(1520MiB/10297msec); 0 zone resets 00:26:05.834 slat (usec): min=19, max=220104, avg=1293.53, stdev=5952.23 00:26:05.834 clat (usec): min=890, max=812748, avg=107004.05, stdev=101751.07 00:26:05.834 lat (usec): min=935, max=812791, avg=108297.59, stdev=102989.57 00:26:05.834 clat percentiles (msec): 00:26:05.834 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 36], 20.00th=[ 50], 00:26:05.834 | 30.00th=[ 61], 40.00th=[ 75], 50.00th=[ 87], 60.00th=[ 97], 00:26:05.834 | 70.00th=[ 104], 80.00th=[ 128], 90.00th=[ 169], 95.00th=[ 347], 00:26:05.834 | 99.00th=[ 542], 99.50th=[ 584], 99.90th=[ 751], 99.95th=[ 785], 00:26:05.834 | 99.99th=[ 810] 00:26:05.834 bw ( KiB/s): min=30720, max=305152, per=13.79%, avg=154005.65, stdev=76669.67, samples=20 00:26:05.834 iops : min= 120, max= 1192, avg=601.55, stdev=299.55, samples=20 00:26:05.834 lat (usec) : 1000=0.02% 00:26:05.834 lat (msec) : 2=0.08%, 4=0.41%, 10=2.06%, 20=3.22%, 50=14.87% 00:26:05.834 lat (msec) : 100=44.00%, 250=29.05%, 500=4.19%, 750=1.94%, 1000=0.16% 00:26:05.834 cpu : usr=1.17%, sys=1.81%, ctx=2797, majf=0, minf=1 00:26:05.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:05.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.834 issued rwts: total=0,6080,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.834 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.834 job10: (groupid=0, jobs=1): err= 0: pid=3434326: Mon Dec 16 05:54:39 2024 00:26:05.834 write: 
IOPS=372, BW=93.0MiB/s (97.5MB/s)(959MiB/10306msec); 0 zone resets 00:26:05.834 slat (usec): min=22, max=88007, avg=2161.78, stdev=5546.68 00:26:05.834 clat (usec): min=943, max=653243, avg=169723.29, stdev=114420.00 00:26:05.834 lat (usec): min=1016, max=657164, avg=171885.06, stdev=115624.08 00:26:05.834 clat percentiles (msec): 00:26:05.834 | 1.00th=[ 10], 5.00th=[ 34], 10.00th=[ 54], 20.00th=[ 80], 00:26:05.834 | 30.00th=[ 106], 40.00th=[ 133], 50.00th=[ 144], 60.00th=[ 159], 00:26:05.834 | 70.00th=[ 184], 80.00th=[ 262], 90.00th=[ 368], 95.00th=[ 405], 00:26:05.834 | 99.00th=[ 542], 99.50th=[ 617], 99.90th=[ 651], 99.95th=[ 651], 00:26:05.834 | 99.99th=[ 651] 00:26:05.834 bw ( KiB/s): min=36864, max=289280, per=8.64%, avg=96533.30, stdev=59176.27, samples=20 00:26:05.834 iops : min= 144, max= 1130, avg=377.05, stdev=231.19, samples=20 00:26:05.834 lat (usec) : 1000=0.03% 00:26:05.834 lat (msec) : 2=0.26%, 4=0.21%, 10=0.60%, 20=1.85%, 50=5.29% 00:26:05.834 lat (msec) : 100=19.66%, 250=51.03%, 500=19.92%, 750=1.15% 00:26:05.834 cpu : usr=0.89%, sys=1.09%, ctx=1573, majf=0, minf=1 00:26:05.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:05.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.834 issued rwts: total=0,3835,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.834 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.834 00:26:05.834 Run status group 0 (all jobs): 00:26:05.834 WRITE: bw=1091MiB/s (1144MB/s), 65.3MiB/s-148MiB/s (68.5MB/s-155MB/s), io=11.0GiB (11.8GB), run=10058-10306msec 00:26:05.834 00:26:05.834 Disk stats (read/write): 00:26:05.834 nvme0n1: ios=50/11202, merge=0/0, ticks=651/1227728, in_queue=1228379, util=99.90% 00:26:05.834 nvme10n1: ios=49/6380, merge=0/0, ticks=3040/1224363, in_queue=1227403, util=100.00% 00:26:05.834 nvme1n1: ios=43/8262, merge=0/0, ticks=2298/1218791, in_queue=1221089, util=100.00% 00:26:05.834 nvme2n1: ios=48/7758, merge=0/0, ticks=271/1229414, in_queue=1229685, util=100.00% 00:26:05.834 nvme3n1: ios=0/7998, merge=0/0, ticks=0/1223929, in_queue=1223929, util=97.81% 00:26:05.834 nvme4n1: ios=0/5311, merge=0/0, ticks=0/1228175, in_queue=1228175, util=98.16% 00:26:05.834 nvme5n1: ios=0/8616, merge=0/0, ticks=0/1234819, in_queue=1234819, util=98.30% 00:26:05.834 nvme6n1: ios=0/5403, merge=0/0, ticks=0/1225093, in_queue=1225093, util=98.42% 00:26:05.834 nvme7n1: ios=45/8429, merge=0/0, ticks=2663/1190087, in_queue=1192750, util=100.00% 00:26:05.834 nvme8n1: ios=53/12089, merge=0/0, ticks=2851/1190169, in_queue=1193020, util=100.00% 00:26:05.834 nvme9n1: ios=0/7584, merge=0/0, ticks=0/1227697, in_queue=1227697, util=99.08% 00:26:05.834 05:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:05.834 05:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:05.834 05:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:05.834 05:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:06.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:06.094 05:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:06.094 05:54:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:06.094 05:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:06.094 05:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:26:06.094 05:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:06.094 05:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:26:06.094 05:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:06.094 05:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:06.094 05:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.094 05:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.094 05:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.094 05:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.094 05:54:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:06.661 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:06.661 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:06.661 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:06.661 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:06.661 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:26:06.661 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:26:06.661 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:06.661 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:06.661 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:06.661 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.661 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.661 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.661 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.661 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:06.920 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:06.920 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:06.920 05:54:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:06.920 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:06.920 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:26:07.180 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:07.180 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:26:07.180 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:07.180 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:07.180 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.180 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.180 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.180 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.180 05:54:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:07.180 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:07.180 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:07.180 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:07.180 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:07.180 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:26:07.438 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:07.438 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:26:07.438 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:07.438 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:07.438 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.438 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.438 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.438 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.438 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:07.698 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:07.698 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:07.698 05:54:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:07.698 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:07.698 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:26:07.698 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:07.698 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:26:07.698 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:07.698 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:07.698 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.698 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.698 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.698 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.698 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:07.957 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:07.957 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:07.957 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:07.957 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:07.957 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:26:07.957 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:07.957 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:26:07.957 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:07.957 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:07.957 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.957 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.957 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.957 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.957 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:08.216 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:08.216 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:08.216 05:54:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:08.216 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:08.216 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:26:08.216 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:08.216 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:26:08.216 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:08.216 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:08.216 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.216 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.216 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.216 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.216 05:54:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:08.216 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:08.217 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:08.217 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:08.217 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:08.217 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:26:08.217 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:08.217 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:26:08.217 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:08.217 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:08.217 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.217 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.476 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.476 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.476 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:08.476 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:08.476 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:08.476 05:54:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:08.476 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:08.476 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:26:08.476 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:08.476 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:26:08.476 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:08.476 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:08.476 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.476 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.476 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.476 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.476 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:08.476 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:08.476 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:08.476 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:08.735 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:26:08.735 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:08.735 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:08.735 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:26:08.735 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:08.735 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:08.735 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.735 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:08.736 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:08.736 
05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:08.736 rmmod nvme_tcp 00:26:08.736 rmmod nvme_fabrics 00:26:08.736 rmmod nvme_keyring 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 3426221 ']' 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 3426221 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 3426221 ']' 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 3426221 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:08.736 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3426221 00:26:08.995 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:08.995 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:08.995 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3426221' 00:26:08.995 killing process with pid 3426221 00:26:08.995 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 3426221 00:26:08.995 05:54:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 3426221 00:26:09.254 05:54:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:09.254 05:54:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:09.254 05:54:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:09.254 05:54:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:09.254 05:54:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:09.254 05:54:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save 00:26:09.254 05:54:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore 00:26:09.254 05:54:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:09.254 05:54:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:09.254 05:54:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.254 05:54:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:09.254 05:54:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.788 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:11.788 00:26:11.788 real 1m11.225s 00:26:11.788 user 4m18.455s 00:26:11.788 sys 0m17.336s 00:26:11.788 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:11.788 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.788 ************************************ 00:26:11.788 END TEST nvmf_multiconnection 00:26:11.788 ************************************ 00:26:11.788 05:54:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:11.788 05:54:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:11.788 05:54:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:11.788 05:54:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:11.788 ************************************ 00:26:11.788 START TEST nvmf_initiator_timeout 00:26:11.788 ************************************ 00:26:11.788 05:54:45 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:11.788 * Looking for test storage... 00:26:11.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:11.788 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:11.788 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:26:11.788 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:11.788 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:11.788 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:11.788 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:11.788 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:11.788 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:11.788 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:11.788 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:11.788 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:11.788 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:11.788 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:11.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.789 --rc genhtml_branch_coverage=1 00:26:11.789 --rc genhtml_function_coverage=1 00:26:11.789 --rc genhtml_legend=1 00:26:11.789 --rc geninfo_all_blocks=1 00:26:11.789 --rc geninfo_unexecuted_blocks=1 00:26:11.789 00:26:11.789 ' 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:11.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.789 --rc genhtml_branch_coverage=1 00:26:11.789 --rc genhtml_function_coverage=1 00:26:11.789 --rc genhtml_legend=1 00:26:11.789 --rc geninfo_all_blocks=1 00:26:11.789 --rc geninfo_unexecuted_blocks=1 00:26:11.789 00:26:11.789 ' 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:11.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.789 --rc genhtml_branch_coverage=1 00:26:11.789 --rc genhtml_function_coverage=1 00:26:11.789 --rc genhtml_legend=1 00:26:11.789 --rc geninfo_all_blocks=1 00:26:11.789 --rc geninfo_unexecuted_blocks=1 00:26:11.789 00:26:11.789 ' 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:11.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.789 --rc genhtml_branch_coverage=1 00:26:11.789 --rc genhtml_function_coverage=1 00:26:11.789 --rc genhtml_legend=1 00:26:11.789 --rc geninfo_all_blocks=1 00:26:11.789 --rc geninfo_unexecuted_blocks=1 00:26:11.789 00:26:11.789 ' 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:11.789 05:54:45 
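A note on the PATH exports traced above: paths/export.sh prepends the Go, protoc and golangci directories every time it is sourced, which is why the same entries repeat many times in the echoed PATH. That is harmless for command lookup, just noisy. A minimal sketch of an idempotent prepend, purely illustrative and not what export.sh actually does:

  prepend_path() {
    case ":$PATH:" in
      *":$1:"*) ;;              # already on PATH, leave it alone
      *) PATH="$1:$PATH" ;;     # otherwise prepend exactly once
    esac
  }
  prepend_path /opt/go/1.21.1/bin
  prepend_path /opt/protoc/21.7/bin
  prepend_path /opt/golangci/1.54.2/bin
  export PATH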
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:11.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:11.789 05:54:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:17.062 05:54:50 
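The "[: : integer expression expected" message from common.sh line 33 earlier in this stretch of the trace is bash's standard complaint when [ is asked for a numeric comparison against an empty string, exactly as the traced '[' '' -eq 1 ']' shows; the run continues because that test only gates an optional branch. A minimal sketch of the pattern and the usual guard (FLAG is a hypothetical variable name, not the one common.sh actually tests):

  FLAG=""
  [ "$FLAG" -eq 1 ]        # empty string is not an integer -> error printed, test fails
  [ "${FLAG:-0}" -eq 1 ]   # defaulting to 0 keeps the comparison numeric and silent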
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:17.062 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:17.062 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.062 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:17.063 Found net devices under 0000:af:00.0: cvl_0_0 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 
1 == 0 )) 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:17.063 Found net devices under 0000:af:00.1: cvl_0_1 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # is_hw=yes 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:17.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:17.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:26:17.063 00:26:17.063 --- 10.0.0.2 ping statistics --- 00:26:17.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.063 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:17.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:17.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:26:17.063 00:26:17.063 --- 10.0.0.1 ping statistics --- 00:26:17.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.063 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # return 0 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=3439498 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # 
waitforlisten 3439498 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 3439498 ']' 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:17.063 05:54:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.063 [2024-12-16 05:54:50.837425] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:26:17.063 [2024-12-16 05:54:50.837467] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:17.063 [2024-12-16 05:54:50.896856] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:17.321 [2024-12-16 05:54:50.937794] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.321 [2024-12-16 05:54:50.937829] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.321 [2024-12-16 05:54:50.937837] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:17.321 [2024-12-16 05:54:50.937843] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:17.321 [2024-12-16 05:54:50.937866] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
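The target-side plumbing traced above (nvmf_tcp_init plus nvmfappstart) condenses to a short sequence: move one physical port into a network namespace, address both ends, open TCP port 4420 on the initiator-facing interface, and start nvmf_tgt inside the namespace. A sketch of the same steps, with device names, addresses and flags taken from this run (commands assume the spdk checkout as the working directory):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: ...'   # comment text abbreviated; the tag is what cleanup greps for
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &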
00:26:17.321 [2024-12-16 05:54:50.937911] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.321 [2024-12-16 05:54:50.937995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:17.321 [2024-12-16 05:54:50.938086] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:17.321 [2024-12-16 05:54:50.938087] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.321 Malloc0 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.321 Delay0 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.321 [2024-12-16 05:54:51.123350] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.321 05:54:51 
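The rpc_cmd calls here and just below build the target configuration: a 64 MB malloc bdev with 512-byte blocks, a delay bdev layered on top of it, the TCP transport, and subsystem nqn.2016-06.io.spdk:cnode1 with that namespace and a listener on 10.0.0.2:4420. rpc_cmd is essentially the test suite's wrapper around scripts/rpc.py, so the same setup could be reproduced by hand roughly as follows (arguments copied from this trace, paths relative to the spdk checkout):

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The initiator in the root namespace then attaches with the nvme connect command shown below, using the host NQN and host ID generated earlier by nvme gen-hostnqn.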
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.321 [2024-12-16 05:54:51.152695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.321 05:54:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:18.694 05:54:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:18.694 05:54:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:26:18.694 05:54:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:18.694 05:54:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:18.694 05:54:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:26:20.593 05:54:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:20.593 05:54:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:20.593 05:54:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:20.593 05:54:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:20.593 05:54:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:20.593 05:54:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:26:20.593 05:54:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3440123 00:26:20.593 05:54:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 
1 -t write -r 60 -v 00:26:20.593 05:54:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:20.593 [global] 00:26:20.593 thread=1 00:26:20.593 invalidate=1 00:26:20.593 rw=write 00:26:20.593 time_based=1 00:26:20.593 runtime=60 00:26:20.593 ioengine=libaio 00:26:20.593 direct=1 00:26:20.593 bs=4096 00:26:20.593 iodepth=1 00:26:20.593 norandommap=0 00:26:20.593 numjobs=1 00:26:20.593 00:26:20.593 verify_dump=1 00:26:20.593 verify_backlog=512 00:26:20.593 verify_state_save=0 00:26:20.593 do_verify=1 00:26:20.593 verify=crc32c-intel 00:26:20.593 [job0] 00:26:20.593 filename=/dev/nvme0n1 00:26:20.593 Could not set queue depth (nvme0n1) 00:26:20.851 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:20.851 fio-3.35 00:26:20.851 Starting 1 thread 00:26:24.140 05:54:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:24.140 05:54:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.140 05:54:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:24.140 true 00:26:24.140 05:54:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.140 05:54:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:24.140 05:54:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.140 05:54:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:24.140 true 00:26:24.140 05:54:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.140 05:54:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:24.140 05:54:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.140 05:54:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:24.140 true 00:26:24.140 05:54:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.140 05:54:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:24.140 05:54:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.140 05:54:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:24.140 true 00:26:24.140 05:54:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.140 05:54:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:26.669 05:55:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:26.669 05:55:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.669 05:55:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
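The fio job printed above (4 KiB sequential writes, iodepth 1, 60 s runtime, crc32c verify) runs against /dev/nvme0n1, which sits on the Delay0 bdev. While it runs, the trace raises the delay latencies from 30 to 31000000 (and p99_write to 310000000); bdev_delay latencies are expressed in microseconds, so this presumably pushes I/O latency to roughly 31 s, past the initiator's default 30-second I/O timeout that the test is named for, before restoring the values a few lines below. The same toggling could be done by hand roughly as (values copied from the trace):

  ./scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  31000000
  ./scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
  ./scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
  ./scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
  sleep 3
  ./scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  30   # restore; the other three follow the same way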
-- common/autotest_common.sh@10 -- # set +x 00:26:26.669 true 00:26:26.669 05:55:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.669 05:55:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:26.669 05:55:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.669 05:55:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.669 true 00:26:26.669 05:55:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.669 05:55:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:26.669 05:55:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.669 05:55:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.669 true 00:26:26.669 05:55:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.669 05:55:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:26.669 05:55:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.669 05:55:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.669 true 00:26:26.669 05:55:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.669 05:55:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:26.669 05:55:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3440123 00:27:22.896 00:27:22.896 job0: (groupid=0, jobs=1): err= 0: pid=3440246: Mon Dec 16 05:55:54 2024 00:27:22.896 read: IOPS=103, BW=415KiB/s (425kB/s)(24.3MiB/60021msec) 00:27:22.896 slat (nsec): min=7041, max=56391, avg=9679.56, stdev=3955.66 00:27:22.896 clat (usec): min=214, max=43452, avg=2728.70, stdev=9695.04 00:27:22.896 lat (usec): min=223, max=43476, avg=2738.38, stdev=9698.54 00:27:22.896 clat percentiles (usec): 00:27:22.896 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 247], 00:27:22.896 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:27:22.896 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 416], 95.00th=[41157], 00:27:22.896 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:27:22.896 | 99.99th=[43254] 00:27:22.896 write: IOPS=110, BW=444KiB/s (454kB/s)(26.0MiB/60021msec); 0 zone resets 00:27:22.896 slat (usec): min=10, max=23912, avg=16.02, stdev=292.96 00:27:22.896 clat (usec): min=156, max=41503k, avg=6430.63, stdev=508706.18 00:27:22.896 lat (usec): min=169, max=41503k, avg=6446.65, stdev=508706.21 00:27:22.896 clat percentiles (usec): 00:27:22.896 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 178], 00:27:22.896 | 20.00th=[ 184], 30.00th=[ 188], 40.00th=[ 190], 00:27:22.896 | 50.00th=[ 194], 60.00th=[ 196], 70.00th=[ 202], 00:27:22.896 | 80.00th=[ 206], 90.00th=[ 215], 95.00th=[ 227], 00:27:22.896 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 343], 00:27:22.896 | 99.95th=[ 383], 
99.99th=[17112761] 00:27:22.896 bw ( KiB/s): min= 840, max= 8192, per=100.00%, avg=5916.44, stdev=3253.62, samples=9 00:27:22.896 iops : min= 210, max= 2048, avg=1479.11, stdev=813.41, samples=9 00:27:22.896 lat (usec) : 250=64.46%, 500=32.56%, 750=0.02% 00:27:22.896 lat (msec) : 2=0.03%, 4=0.01%, 50=2.92%, >=2000=0.01% 00:27:22.896 cpu : usr=0.21%, sys=0.36%, ctx=12892, majf=0, minf=1 00:27:22.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:22.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.896 issued rwts: total=6232,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:22.896 00:27:22.896 Run status group 0 (all jobs): 00:27:22.896 READ: bw=415KiB/s (425kB/s), 415KiB/s-415KiB/s (425kB/s-425kB/s), io=24.3MiB (25.5MB), run=60021-60021msec 00:27:22.896 WRITE: bw=444KiB/s (454kB/s), 444KiB/s-444KiB/s (454kB/s-454kB/s), io=26.0MiB (27.3MB), run=60021-60021msec 00:27:22.896 00:27:22.896 Disk stats (read/write): 00:27:22.896 nvme0n1: ios=6283/6656, merge=0/0, ticks=17244/1246, in_queue=18490, util=99.71% 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:22.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:22.896 nvmf hotplug test: fio successful as expected 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- 
# trap - SIGINT SIGTERM EXIT 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:22.896 rmmod nvme_tcp 00:27:22.896 rmmod nvme_fabrics 00:27:22.896 rmmod nvme_keyring 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 3439498 ']' 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 3439498 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 3439498 ']' 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 3439498 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:22.896 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3439498 00:27:22.896 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:22.896 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:22.896 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3439498' 00:27:22.896 killing process with pid 3439498 00:27:22.896 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 3439498 00:27:22.896 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 3439498 00:27:22.896 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:22.896 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:22.896 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:22.896 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:22.896 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:27:22.896 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:22.896 
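The iptr cleanup traced around here undoes only the firewall changes the test made: because every rule was inserted with an "-m comment --comment 'SPDK_NVMF:...'" tag, the teardown can round-trip the whole ruleset and drop the tagged lines. In effect:

  iptables-save | grep -v SPDK_NVMF | iptables-restore

followed by removing the cvl_0_0_ns_spdk namespace and flushing the test addresses, as the next entries show.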
05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:27:22.896 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:22.896 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:22.896 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.896 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:22.896 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.464 05:55:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:23.464 00:27:23.464 real 1m12.116s 00:27:23.464 user 4m22.628s 00:27:23.464 sys 0m6.117s 00:27:23.464 05:55:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:23.464 05:55:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:23.464 ************************************ 00:27:23.464 END TEST nvmf_initiator_timeout 00:27:23.464 ************************************ 00:27:23.464 05:55:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:23.464 05:55:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:23.464 05:55:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:23.464 05:55:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:23.464 05:55:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:28.736 05:56:02 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:27:28.736 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:28.737 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:28.737 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 
00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:28.737 Found net devices under 0000:af:00.0: cvl_0_0 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:28.737 Found net devices under 0000:af:00.1: cvl_0_1 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:28.737 ************************************ 00:27:28.737 START TEST nvmf_perf_adq 00:27:28.737 ************************************ 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:28.737 * Looking for test storage... 
00:27:28.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:27:28.737 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:28.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.997 --rc genhtml_branch_coverage=1 00:27:28.997 --rc genhtml_function_coverage=1 00:27:28.997 --rc genhtml_legend=1 00:27:28.997 --rc geninfo_all_blocks=1 00:27:28.997 --rc geninfo_unexecuted_blocks=1 00:27:28.997 00:27:28.997 ' 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:28.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.997 --rc genhtml_branch_coverage=1 00:27:28.997 --rc genhtml_function_coverage=1 00:27:28.997 --rc genhtml_legend=1 00:27:28.997 --rc geninfo_all_blocks=1 00:27:28.997 --rc geninfo_unexecuted_blocks=1 00:27:28.997 00:27:28.997 ' 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:28.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.997 --rc genhtml_branch_coverage=1 00:27:28.997 --rc genhtml_function_coverage=1 00:27:28.997 --rc genhtml_legend=1 00:27:28.997 --rc geninfo_all_blocks=1 00:27:28.997 --rc geninfo_unexecuted_blocks=1 00:27:28.997 00:27:28.997 ' 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:28.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.997 --rc genhtml_branch_coverage=1 00:27:28.997 --rc genhtml_function_coverage=1 00:27:28.997 --rc genhtml_legend=1 00:27:28.997 --rc geninfo_all_blocks=1 00:27:28.997 --rc geninfo_unexecuted_blocks=1 00:27:28.997 00:27:28.997 ' 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:28.997 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:28.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:28.998 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:28.998 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:28.998 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:28.998 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:28.998 05:56:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:28.998 05:56:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:34.268 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:34.268 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:34.268 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.269 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:34.269 Found net devices under 0000:af:00.0: cvl_0_0 00:27:34.269 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.269 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:34.269 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.269 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:34.269 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.269 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:34.269 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:34.269 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.269 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:34.269 Found net devices under 0000:af:00.1: cvl_0_1 00:27:34.269 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.269 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:34.269 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:34.269 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:34.269 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:34.269 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:34.269 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:34.269 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:35.206 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:37.741 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:43.095 05:56:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:43.095 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:43.095 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:43.095 Found net devices under 0000:af:00.0: cvl_0_0 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:43.095 Found net devices under 0000:af:00.1: cvl_0_1 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:43.095 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:43.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:43.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:27:43.096 00:27:43.096 --- 10.0.0.2 ping statistics --- 00:27:43.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.096 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:43.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:43.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:27:43.096 00:27:43.096 --- 10.0.0.1 ping statistics --- 00:27:43.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.096 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=3457746 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 
3457746 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3457746 ']' 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.096 [2024-12-16 05:56:16.491775] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:43.096 [2024-12-16 05:56:16.491816] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.096 [2024-12-16 05:56:16.552336] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:43.096 [2024-12-16 05:56:16.593140] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.096 [2024-12-16 05:56:16.593178] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:43.096 [2024-12-16 05:56:16.593185] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.096 [2024-12-16 05:56:16.593192] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:43.096 [2024-12-16 05:56:16.593197] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:43.096 [2024-12-16 05:56:16.593240] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.096 [2024-12-16 05:56:16.593262] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:43.096 [2024-12-16 05:56:16.593348] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:27:43.096 [2024-12-16 05:56:16.593350] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.096 [2024-12-16 05:56:16.811423] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.096 Malloc1 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.096 [2024-12-16 05:56:16.857662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3457771 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:43.096 05:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:45.645 05:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:45.645 05:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.645 05:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:45.645 05:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.645 05:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:45.645 "tick_rate": 2100000000, 00:27:45.645 "poll_groups": [ 00:27:45.645 { 00:27:45.645 "name": "nvmf_tgt_poll_group_000", 00:27:45.645 "admin_qpairs": 1, 00:27:45.645 "io_qpairs": 1, 00:27:45.645 "current_admin_qpairs": 1, 00:27:45.645 "current_io_qpairs": 1, 00:27:45.645 "pending_bdev_io": 0, 00:27:45.645 
"completed_nvme_io": 19635, 00:27:45.645 "transports": [ 00:27:45.645 { 00:27:45.645 "trtype": "TCP" 00:27:45.645 } 00:27:45.645 ] 00:27:45.645 }, 00:27:45.645 { 00:27:45.645 "name": "nvmf_tgt_poll_group_001", 00:27:45.645 "admin_qpairs": 0, 00:27:45.645 "io_qpairs": 1, 00:27:45.645 "current_admin_qpairs": 0, 00:27:45.645 "current_io_qpairs": 1, 00:27:45.645 "pending_bdev_io": 0, 00:27:45.645 "completed_nvme_io": 19955, 00:27:45.645 "transports": [ 00:27:45.645 { 00:27:45.645 "trtype": "TCP" 00:27:45.645 } 00:27:45.645 ] 00:27:45.645 }, 00:27:45.645 { 00:27:45.645 "name": "nvmf_tgt_poll_group_002", 00:27:45.645 "admin_qpairs": 0, 00:27:45.645 "io_qpairs": 1, 00:27:45.645 "current_admin_qpairs": 0, 00:27:45.645 "current_io_qpairs": 1, 00:27:45.645 "pending_bdev_io": 0, 00:27:45.645 "completed_nvme_io": 20081, 00:27:45.645 "transports": [ 00:27:45.645 { 00:27:45.645 "trtype": "TCP" 00:27:45.645 } 00:27:45.645 ] 00:27:45.645 }, 00:27:45.645 { 00:27:45.645 "name": "nvmf_tgt_poll_group_003", 00:27:45.645 "admin_qpairs": 0, 00:27:45.645 "io_qpairs": 1, 00:27:45.645 "current_admin_qpairs": 0, 00:27:45.645 "current_io_qpairs": 1, 00:27:45.645 "pending_bdev_io": 0, 00:27:45.645 "completed_nvme_io": 19804, 00:27:45.645 "transports": [ 00:27:45.645 { 00:27:45.645 "trtype": "TCP" 00:27:45.645 } 00:27:45.645 ] 00:27:45.645 } 00:27:45.645 ] 00:27:45.645 }' 00:27:45.645 05:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:45.645 05:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:45.645 05:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:45.645 05:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:45.645 05:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3457771 00:27:53.762 Initializing NVMe Controllers 00:27:53.762 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:53.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:53.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:53.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:53.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:53.762 Initialization complete. Launching workers. 
00:27:53.762 ======================================================== 00:27:53.762 Latency(us) 00:27:53.762 Device Information : IOPS MiB/s Average min max 00:27:53.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10209.80 39.88 6268.22 2157.19 10629.60 00:27:53.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10310.30 40.27 6208.07 1946.46 11113.53 00:27:53.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10398.60 40.62 6153.62 1413.58 10574.94 00:27:53.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10176.50 39.75 6289.90 2336.43 10920.25 00:27:53.762 ======================================================== 00:27:53.762 Total : 41095.19 160.53 6229.50 1413.58 11113.53 00:27:53.762 00:27:53.762 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:27:53.762 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:53.762 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:27:53.762 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:53.762 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:27:53.762 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:53.762 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:53.762 rmmod nvme_tcp 00:27:53.762 rmmod nvme_fabrics 00:27:53.762 rmmod nvme_keyring 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 3457746 ']' 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 3457746 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3457746 ']' 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3457746 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3457746 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3457746' 00:27:53.762 killing process with pid 3457746 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3457746 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3457746 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:53.762 
05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:53.762 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.666 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:55.666 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:27:55.666 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:55.666 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:57.042 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:59.578 05:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:04.848 05:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # 
(( 2 == 0 )) 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:04.848 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:04.848 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:04.848 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:04.849 Found net devices under 0000:af:00.0: cvl_0_0 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:04.849 Found net devices under 0000:af:00.1: cvl_0_1 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:04.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:04.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.535 ms 00:28:04.849 00:28:04.849 --- 10.0.0.2 ping statistics --- 00:28:04.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.849 rtt min/avg/max/mdev = 0.535/0.535/0.535/0.000 ms 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:04.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:04.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:28:04.849 00:28:04.849 --- 10.0.0.1 ping statistics --- 00:28:04.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.849 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:04.849 net.core.busy_poll = 1 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:04.849 net.core.busy_read = 1 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:04.849 05:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:04.849 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:05.108 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:05.108 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:05.108 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:05.108 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:05.108 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:05.108 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:05.108 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=3461691 00:28:05.108 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 3461691 00:28:05.108 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:05.108 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3461691 ']' 00:28:05.108 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.108 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:05.108 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.108 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:05.108 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:05.108 [2024-12-16 05:56:38.810954] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:05.108 [2024-12-16 05:56:38.810999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.108 [2024-12-16 05:56:38.871549] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:05.108 [2024-12-16 05:56:38.913357] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:05.108 [2024-12-16 05:56:38.913395] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
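For readability, the host-side ADQ configuration that perf_adq.sh@22-38 traces above boils down to the sequence below, executed inside the cvl_0_0_ns_spdk namespace that holds the target-facing port (cvl_0_0, 10.0.0.2). This is a condensed restatement of the commands already traced, not additional steps; the interface name, address and the 2x2 queue split are the values of this particular run, not general defaults:

    # enable hardware TC offload and kernel busy polling on the target interface
    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # split the port into 2 traffic classes: TC0 = queues 0-1, TC1 = queues 2-3 (channel mode)
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # steer NVMe/TCP traffic to 10.0.0.2:4420 into hardware TC1, hardware-only (skip_sw)
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 \
        ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # align transmit/receive queue affinity for the interface (SPDK helper script)
    scripts/perf/nvmf/set_xps_rxqs cvl_0_0

The nvmf_tgt launched right after this (nvmfappstart -m 0xF --wait-for-rpc) runs in the same namespace, pinned to cores 0-3 and held at the RPC-configuration stage until framework_start_init is issued further down.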
00:28:05.108 [2024-12-16 05:56:38.913402] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:05.108 [2024-12-16 05:56:38.913409] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:05.108 [2024-12-16 05:56:38.913414] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:05.108 [2024-12-16 05:56:38.913460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.108 [2024-12-16 05:56:38.913543] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:05.108 [2024-12-16 05:56:38.913631] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:05.108 [2024-12-16 05:56:38.913631] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.108 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:05.108 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:28:05.108 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:05.108 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:05.108 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:05.367 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:05.368 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:05.368 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:05.368 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:05.368 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.368 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:05.368 05:56:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:05.368 [2024-12-16 05:56:39.135541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:05.368 Malloc1 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:05.368 [2024-12-16 05:56:39.178942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3461814 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:05.368 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:07.901 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:07.901 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.901 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:07.901 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
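The target-side half of the configuration is issued through rpc_cmd, the test suite's wrapper around SPDK's scripts/rpc.py. Condensed to the equivalent direct calls (again a restatement of the trace above, with rpc.py standing in for the wrapper), perf_adq.sh@42-49 does:

    # place accepted connections onto poll groups by socket placement id, enable zero-copy send
    rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The load generator started at perf_adq.sh@101 is spdk_nvme_perf with -q 64 -o 4096 -w randread -t 10 on cores 4-7 (-c 0xF0), connecting to that listener. The nvmf_get_stats output that follows is filtered with jq for poll groups whose current_io_qpairs is 0; the check at perf_adq.sh@108-109 reads as requiring at least 2 of the 4 poll groups to stay idle, i.e. that placement-id steering concentrated the perf connections onto a subset of reactors instead of spreading them across all four.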
00:28:07.901 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:07.901 "tick_rate": 2100000000, 00:28:07.901 "poll_groups": [ 00:28:07.901 { 00:28:07.901 "name": "nvmf_tgt_poll_group_000", 00:28:07.901 "admin_qpairs": 1, 00:28:07.901 "io_qpairs": 2, 00:28:07.901 "current_admin_qpairs": 1, 00:28:07.901 "current_io_qpairs": 2, 00:28:07.901 "pending_bdev_io": 0, 00:28:07.901 "completed_nvme_io": 28283, 00:28:07.901 "transports": [ 00:28:07.901 { 00:28:07.901 "trtype": "TCP" 00:28:07.901 } 00:28:07.901 ] 00:28:07.901 }, 00:28:07.901 { 00:28:07.901 "name": "nvmf_tgt_poll_group_001", 00:28:07.901 "admin_qpairs": 0, 00:28:07.901 "io_qpairs": 2, 00:28:07.901 "current_admin_qpairs": 0, 00:28:07.901 "current_io_qpairs": 2, 00:28:07.901 "pending_bdev_io": 0, 00:28:07.901 "completed_nvme_io": 28677, 00:28:07.901 "transports": [ 00:28:07.901 { 00:28:07.901 "trtype": "TCP" 00:28:07.901 } 00:28:07.901 ] 00:28:07.901 }, 00:28:07.901 { 00:28:07.901 "name": "nvmf_tgt_poll_group_002", 00:28:07.901 "admin_qpairs": 0, 00:28:07.901 "io_qpairs": 0, 00:28:07.901 "current_admin_qpairs": 0, 00:28:07.901 "current_io_qpairs": 0, 00:28:07.901 "pending_bdev_io": 0, 00:28:07.901 "completed_nvme_io": 0, 00:28:07.901 "transports": [ 00:28:07.901 { 00:28:07.901 "trtype": "TCP" 00:28:07.901 } 00:28:07.901 ] 00:28:07.901 }, 00:28:07.901 { 00:28:07.901 "name": "nvmf_tgt_poll_group_003", 00:28:07.901 "admin_qpairs": 0, 00:28:07.901 "io_qpairs": 0, 00:28:07.901 "current_admin_qpairs": 0, 00:28:07.901 "current_io_qpairs": 0, 00:28:07.901 "pending_bdev_io": 0, 00:28:07.901 "completed_nvme_io": 0, 00:28:07.901 "transports": [ 00:28:07.901 { 00:28:07.901 "trtype": "TCP" 00:28:07.901 } 00:28:07.901 ] 00:28:07.901 } 00:28:07.901 ] 00:28:07.901 }' 00:28:07.901 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:07.901 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:07.901 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:07.901 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:07.901 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3461814 00:28:16.017 Initializing NVMe Controllers 00:28:16.017 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:16.017 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:16.017 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:16.017 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:16.017 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:16.017 Initialization complete. Launching workers. 
00:28:16.017 ======================================================== 00:28:16.017 Latency(us) 00:28:16.017 Device Information : IOPS MiB/s Average min max 00:28:16.017 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9321.82 36.41 6865.12 1078.50 53889.83 00:28:16.017 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7896.94 30.85 8104.92 1535.75 55011.24 00:28:16.017 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7199.54 28.12 8917.03 1578.30 54233.15 00:28:16.017 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5922.65 23.14 10806.06 1105.11 52945.70 00:28:16.017 ======================================================== 00:28:16.017 Total : 30340.95 118.52 8443.98 1078.50 55011.24 00:28:16.017 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:16.017 rmmod nvme_tcp 00:28:16.017 rmmod nvme_fabrics 00:28:16.017 rmmod nvme_keyring 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 3461691 ']' 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 3461691 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3461691 ']' 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3461691 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3461691 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3461691' 00:28:16.017 killing process with pid 3461691 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3461691 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3461691 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:16.017 
05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.017 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.307 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:19.307 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:19.307 00:28:19.307 real 0m50.305s 00:28:19.307 user 2m43.416s 00:28:19.307 sys 0m9.650s 00:28:19.307 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:19.307 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.307 ************************************ 00:28:19.307 END TEST nvmf_perf_adq 00:28:19.307 ************************************ 00:28:19.307 05:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:19.307 05:56:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:19.307 05:56:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:19.307 05:56:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:19.308 ************************************ 00:28:19.308 START TEST nvmf_shutdown 00:28:19.308 ************************************ 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:19.308 * Looking for test storage... 
00:28:19.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:19.308 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:19.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.308 --rc genhtml_branch_coverage=1 00:28:19.308 --rc genhtml_function_coverage=1 00:28:19.308 --rc genhtml_legend=1 00:28:19.308 --rc geninfo_all_blocks=1 00:28:19.308 --rc geninfo_unexecuted_blocks=1 00:28:19.308 00:28:19.308 ' 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:19.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.308 --rc genhtml_branch_coverage=1 00:28:19.308 --rc genhtml_function_coverage=1 00:28:19.308 --rc genhtml_legend=1 00:28:19.308 --rc geninfo_all_blocks=1 00:28:19.308 --rc geninfo_unexecuted_blocks=1 00:28:19.308 00:28:19.308 ' 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:19.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.308 --rc genhtml_branch_coverage=1 00:28:19.308 --rc genhtml_function_coverage=1 00:28:19.308 --rc genhtml_legend=1 00:28:19.308 --rc geninfo_all_blocks=1 00:28:19.308 --rc geninfo_unexecuted_blocks=1 00:28:19.308 00:28:19.308 ' 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:19.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.308 --rc genhtml_branch_coverage=1 00:28:19.308 --rc genhtml_function_coverage=1 00:28:19.308 --rc genhtml_legend=1 00:28:19.308 --rc geninfo_all_blocks=1 00:28:19.308 --rc geninfo_unexecuted_blocks=1 00:28:19.308 00:28:19.308 ' 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
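The scripts/common.sh activity traced above is only a version gate: lt 1.15 2 expands to cmp_versions 1.15 '<' 2, which splits both strings on ".-:" and compares them element by element, and the result selects the lcov flag set exported next (the 1.x-style --rc lcov_branch_coverage / --rc lcov_function_coverage options seen here). A minimal restatement of the comparison:

    IFS=.-: read -ra ver1 <<< "1.15"   # ver1=(1 15)
    IFS=.-: read -ra ver2 <<< "2"      # ver2=(2)
    # components are compared left to right; 1 < 2 already settles it,
    # so "1.15 < 2" is true and cmp_versions returns 0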
00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:19.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:19.308 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:19.309 05:56:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@169 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:19.309 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:19.309 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:19.309 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:19.309 ************************************ 00:28:19.309 START TEST nvmf_shutdown_tc1 00:28:19.309 ************************************ 00:28:19.309 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:28:19.309 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:19.309 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:19.309 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:19.309 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.309 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:19.309 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:19.309 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:19.309 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.309 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.309 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.309 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:19.309 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:19.309 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:19.309 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:24.572 05:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:24.572 05:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:24.572 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:24.572 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:24.572 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:24.573 Found net devices under 0000:af:00.0: cvl_0_0 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.573 05:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:24.573 Found net devices under 0000:af:00.1: cvl_0_1 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # is_hw=yes 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:24.573 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:24.830 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:24.830 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:24.830 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:24.830 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:24.830 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:24.830 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:24.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:24.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:28:24.830 00:28:24.830 --- 10.0.0.2 ping statistics --- 00:28:24.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.830 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:24.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:24.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:28:24.831 00:28:24.831 --- 10.0.0.1 ping statistics --- 00:28:24.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.831 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # return 0 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # nvmfpid=3467087 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # waitforlisten 3467087 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3467087 ']' 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
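
The nvmf_tcp_init sequence traced above builds a point-to-point test network out of the two E810 ports: the target interface (cvl_0_0) is moved into a private namespace and given 10.0.0.2/24, the initiator interface (cvl_0_1) stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened by an iptables rule tagged with an SPDK_NVMF comment, and a ping in each direction confirms the link. Condensed into plain commands, with the interface names from this run:

    # Test-network setup as traced above (nvmf_tcp_init), condensed.
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                      # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target namespace -> initiator

Tagging the iptables rule with the SPDK_NVMF comment is what lets the teardown near the end of this test (nvmf_tcp_fini) remove it by restoring an iptables-save dump filtered through grep -v SPDK_NVMF.
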
00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:24.831 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:24.831 [2024-12-16 05:56:58.638375] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:24.831 [2024-12-16 05:56:58.638420] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.089 [2024-12-16 05:56:58.698672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:25.089 [2024-12-16 05:56:58.739330] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.089 [2024-12-16 05:56:58.739367] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:25.089 [2024-12-16 05:56:58.739377] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.089 [2024-12-16 05:56:58.739384] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.089 [2024-12-16 05:56:58.739390] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:25.089 [2024-12-16 05:56:58.739496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:25.089 [2024-12-16 05:56:58.739586] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:25.089 [2024-12-16 05:56:58.739693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.089 [2024-12-16 05:56:58.739693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:25.089 [2024-12-16 05:56:58.878291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:25.089 05:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.089 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:25.347 Malloc1 
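
shutdown.sh regenerates rpcs.txt above by appending one block per subsystem (the repeated cat calls at @29) and then replaying the whole file through rpc_cmd at @36; only the resulting Malloc bdevs and the listener notice appear in the trace, not the RPC lines themselves. A plausible reconstruction of what one of those ten blocks does, using standard SPDK RPCs (the Malloc size and serial number below are assumptions, not values taken from this log):

    # Hypothetical per-subsystem block (i=1); the real rpcs.txt batches ten of these.
    scripts/rpc.py bdev_malloc_create -b Malloc1 128 512          # 128 MiB bdev, 512 B blocks (assumed sizes)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
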
00:28:25.347 [2024-12-16 05:56:58.978008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:25.347 Malloc2 00:28:25.347 Malloc3 00:28:25.347 Malloc4 00:28:25.347 Malloc5 00:28:25.347 Malloc6 00:28:25.606 Malloc7 00:28:25.606 Malloc8 00:28:25.606 Malloc9 00:28:25.606 Malloc10 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3467204 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3467204 /var/tmp/bdevperf.sock 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3467204 ']' 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:25.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
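
bdev_svc is handed its configuration as --json /dev/fd/63, i.e. a process substitution fed by gen_nvmf_target_json 1..10, whose expansion fills the trace that follows: the helper accumulates one here-doc fragment per subsystem, each a bdev_nvme_attach_controller call against 10.0.0.2:4420, and joins them with commas (presumably inside a full JSON config document before it reaches the app). The same pattern in miniature, with a stand-in generator (gen_config and some_app are placeholders, not the real helpers):

    # Stand-in generator: one attach_controller entry per argument, comma-joined.
    gen_config() {
        local frags=() i
        for i in "$@"; do
            frags+=("{\"params\":{\"name\":\"Nvme$i\",\"trtype\":\"tcp\",\"traddr\":\"10.0.0.2\",\"trsvcid\":\"4420\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$i\"},\"method\":\"bdev_nvme_attach_controller\"}")
        done
        local IFS=,
        printf '%s\n' "${frags[*]}"
    }
    gen_config 1 2 3                       # prints three comma-joined entries
    # some_app --json <(gen_config 1 2 3)  # the process substitution shows up as /dev/fd/63
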
00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:25.606 { 00:28:25.606 "params": { 00:28:25.606 "name": "Nvme$subsystem", 00:28:25.606 "trtype": "$TEST_TRANSPORT", 00:28:25.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.606 "adrfam": "ipv4", 00:28:25.606 "trsvcid": "$NVMF_PORT", 00:28:25.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.606 "hdgst": ${hdgst:-false}, 00:28:25.606 "ddgst": ${ddgst:-false} 00:28:25.606 }, 00:28:25.606 "method": "bdev_nvme_attach_controller" 00:28:25.606 } 00:28:25.606 EOF 00:28:25.606 )") 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:25.606 { 00:28:25.606 "params": { 00:28:25.606 "name": "Nvme$subsystem", 00:28:25.606 "trtype": "$TEST_TRANSPORT", 00:28:25.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.606 "adrfam": "ipv4", 00:28:25.606 "trsvcid": "$NVMF_PORT", 00:28:25.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.606 "hdgst": ${hdgst:-false}, 00:28:25.606 "ddgst": ${ddgst:-false} 00:28:25.606 }, 00:28:25.606 "method": "bdev_nvme_attach_controller" 00:28:25.606 } 00:28:25.606 EOF 00:28:25.606 )") 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:25.606 { 00:28:25.606 "params": { 00:28:25.606 "name": "Nvme$subsystem", 00:28:25.606 "trtype": "$TEST_TRANSPORT", 00:28:25.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.606 "adrfam": "ipv4", 00:28:25.606 "trsvcid": "$NVMF_PORT", 00:28:25.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.606 "hdgst": ${hdgst:-false}, 00:28:25.606 "ddgst": ${ddgst:-false} 00:28:25.606 }, 00:28:25.606 "method": "bdev_nvme_attach_controller" 00:28:25.606 } 00:28:25.606 EOF 00:28:25.606 )") 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- 
# config+=("$(cat <<-EOF 00:28:25.606 { 00:28:25.606 "params": { 00:28:25.606 "name": "Nvme$subsystem", 00:28:25.606 "trtype": "$TEST_TRANSPORT", 00:28:25.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.606 "adrfam": "ipv4", 00:28:25.606 "trsvcid": "$NVMF_PORT", 00:28:25.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.606 "hdgst": ${hdgst:-false}, 00:28:25.606 "ddgst": ${ddgst:-false} 00:28:25.606 }, 00:28:25.606 "method": "bdev_nvme_attach_controller" 00:28:25.606 } 00:28:25.606 EOF 00:28:25.606 )") 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:25.606 { 00:28:25.606 "params": { 00:28:25.606 "name": "Nvme$subsystem", 00:28:25.606 "trtype": "$TEST_TRANSPORT", 00:28:25.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.606 "adrfam": "ipv4", 00:28:25.606 "trsvcid": "$NVMF_PORT", 00:28:25.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.606 "hdgst": ${hdgst:-false}, 00:28:25.606 "ddgst": ${ddgst:-false} 00:28:25.606 }, 00:28:25.606 "method": "bdev_nvme_attach_controller" 00:28:25.606 } 00:28:25.606 EOF 00:28:25.606 )") 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:25.606 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:25.606 { 00:28:25.606 "params": { 00:28:25.606 "name": "Nvme$subsystem", 00:28:25.606 "trtype": "$TEST_TRANSPORT", 00:28:25.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.606 "adrfam": "ipv4", 00:28:25.606 "trsvcid": "$NVMF_PORT", 00:28:25.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.607 "hdgst": ${hdgst:-false}, 00:28:25.607 "ddgst": ${ddgst:-false} 00:28:25.607 }, 00:28:25.607 "method": "bdev_nvme_attach_controller" 00:28:25.607 } 00:28:25.607 EOF 00:28:25.607 )") 00:28:25.607 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:25.607 [2024-12-16 05:56:59.445005] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:25.607 [2024-12-16 05:56:59.445056] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:25.607 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:25.607 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:25.607 { 00:28:25.607 "params": { 00:28:25.607 "name": "Nvme$subsystem", 00:28:25.607 "trtype": "$TEST_TRANSPORT", 00:28:25.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.607 "adrfam": "ipv4", 00:28:25.607 "trsvcid": "$NVMF_PORT", 00:28:25.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.607 "hdgst": ${hdgst:-false}, 00:28:25.607 "ddgst": ${ddgst:-false} 00:28:25.607 }, 00:28:25.607 "method": "bdev_nvme_attach_controller" 00:28:25.607 } 00:28:25.607 EOF 00:28:25.607 )") 00:28:25.607 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:25.607 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:25.607 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:25.607 { 00:28:25.607 "params": { 00:28:25.607 "name": "Nvme$subsystem", 00:28:25.607 "trtype": "$TEST_TRANSPORT", 00:28:25.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.607 "adrfam": "ipv4", 00:28:25.607 "trsvcid": "$NVMF_PORT", 00:28:25.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.607 "hdgst": ${hdgst:-false}, 00:28:25.607 "ddgst": ${ddgst:-false} 00:28:25.607 }, 00:28:25.607 "method": "bdev_nvme_attach_controller" 00:28:25.607 } 00:28:25.607 EOF 00:28:25.607 )") 00:28:25.607 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:25.607 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:25.607 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:25.607 { 00:28:25.607 "params": { 00:28:25.607 "name": "Nvme$subsystem", 00:28:25.607 "trtype": "$TEST_TRANSPORT", 00:28:25.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.607 "adrfam": "ipv4", 00:28:25.607 "trsvcid": "$NVMF_PORT", 00:28:25.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.607 "hdgst": ${hdgst:-false}, 00:28:25.607 "ddgst": ${ddgst:-false} 00:28:25.607 }, 00:28:25.607 "method": "bdev_nvme_attach_controller" 00:28:25.607 } 00:28:25.607 EOF 00:28:25.607 )") 00:28:25.607 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:25.865 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:25.865 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:25.865 { 00:28:25.865 "params": { 00:28:25.865 "name": "Nvme$subsystem", 00:28:25.865 "trtype": "$TEST_TRANSPORT", 00:28:25.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.865 "adrfam": "ipv4", 
00:28:25.865 "trsvcid": "$NVMF_PORT", 00:28:25.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.865 "hdgst": ${hdgst:-false}, 00:28:25.865 "ddgst": ${ddgst:-false} 00:28:25.865 }, 00:28:25.865 "method": "bdev_nvme_attach_controller" 00:28:25.865 } 00:28:25.865 EOF 00:28:25.865 )") 00:28:25.865 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:25.865 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 00:28:25.865 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:28:25.865 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:28:25.865 "params": { 00:28:25.865 "name": "Nvme1", 00:28:25.865 "trtype": "tcp", 00:28:25.865 "traddr": "10.0.0.2", 00:28:25.865 "adrfam": "ipv4", 00:28:25.865 "trsvcid": "4420", 00:28:25.865 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:25.865 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:25.865 "hdgst": false, 00:28:25.865 "ddgst": false 00:28:25.865 }, 00:28:25.865 "method": "bdev_nvme_attach_controller" 00:28:25.865 },{ 00:28:25.865 "params": { 00:28:25.865 "name": "Nvme2", 00:28:25.865 "trtype": "tcp", 00:28:25.865 "traddr": "10.0.0.2", 00:28:25.865 "adrfam": "ipv4", 00:28:25.865 "trsvcid": "4420", 00:28:25.865 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:25.865 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:25.865 "hdgst": false, 00:28:25.865 "ddgst": false 00:28:25.865 }, 00:28:25.865 "method": "bdev_nvme_attach_controller" 00:28:25.865 },{ 00:28:25.865 "params": { 00:28:25.865 "name": "Nvme3", 00:28:25.865 "trtype": "tcp", 00:28:25.865 "traddr": "10.0.0.2", 00:28:25.865 "adrfam": "ipv4", 00:28:25.865 "trsvcid": "4420", 00:28:25.865 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:25.865 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:25.865 "hdgst": false, 00:28:25.865 "ddgst": false 00:28:25.865 }, 00:28:25.865 "method": "bdev_nvme_attach_controller" 00:28:25.865 },{ 00:28:25.865 "params": { 00:28:25.865 "name": "Nvme4", 00:28:25.865 "trtype": "tcp", 00:28:25.865 "traddr": "10.0.0.2", 00:28:25.865 "adrfam": "ipv4", 00:28:25.865 "trsvcid": "4420", 00:28:25.865 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:25.865 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:25.865 "hdgst": false, 00:28:25.865 "ddgst": false 00:28:25.865 }, 00:28:25.865 "method": "bdev_nvme_attach_controller" 00:28:25.865 },{ 00:28:25.865 "params": { 00:28:25.865 "name": "Nvme5", 00:28:25.865 "trtype": "tcp", 00:28:25.865 "traddr": "10.0.0.2", 00:28:25.865 "adrfam": "ipv4", 00:28:25.865 "trsvcid": "4420", 00:28:25.865 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:25.865 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:25.865 "hdgst": false, 00:28:25.865 "ddgst": false 00:28:25.865 }, 00:28:25.865 "method": "bdev_nvme_attach_controller" 00:28:25.865 },{ 00:28:25.865 "params": { 00:28:25.865 "name": "Nvme6", 00:28:25.865 "trtype": "tcp", 00:28:25.865 "traddr": "10.0.0.2", 00:28:25.865 "adrfam": "ipv4", 00:28:25.865 "trsvcid": "4420", 00:28:25.865 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:25.865 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:25.865 "hdgst": false, 00:28:25.865 "ddgst": false 00:28:25.865 }, 00:28:25.865 "method": "bdev_nvme_attach_controller" 00:28:25.865 },{ 00:28:25.865 "params": { 00:28:25.865 "name": "Nvme7", 00:28:25.865 "trtype": "tcp", 00:28:25.865 "traddr": "10.0.0.2", 00:28:25.865 
"adrfam": "ipv4", 00:28:25.865 "trsvcid": "4420", 00:28:25.865 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:25.865 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:25.865 "hdgst": false, 00:28:25.865 "ddgst": false 00:28:25.865 }, 00:28:25.865 "method": "bdev_nvme_attach_controller" 00:28:25.865 },{ 00:28:25.865 "params": { 00:28:25.865 "name": "Nvme8", 00:28:25.865 "trtype": "tcp", 00:28:25.865 "traddr": "10.0.0.2", 00:28:25.865 "adrfam": "ipv4", 00:28:25.865 "trsvcid": "4420", 00:28:25.865 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:25.865 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:25.865 "hdgst": false, 00:28:25.865 "ddgst": false 00:28:25.865 }, 00:28:25.865 "method": "bdev_nvme_attach_controller" 00:28:25.865 },{ 00:28:25.865 "params": { 00:28:25.865 "name": "Nvme9", 00:28:25.865 "trtype": "tcp", 00:28:25.865 "traddr": "10.0.0.2", 00:28:25.865 "adrfam": "ipv4", 00:28:25.865 "trsvcid": "4420", 00:28:25.865 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:25.865 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:25.865 "hdgst": false, 00:28:25.865 "ddgst": false 00:28:25.865 }, 00:28:25.865 "method": "bdev_nvme_attach_controller" 00:28:25.865 },{ 00:28:25.865 "params": { 00:28:25.865 "name": "Nvme10", 00:28:25.865 "trtype": "tcp", 00:28:25.865 "traddr": "10.0.0.2", 00:28:25.865 "adrfam": "ipv4", 00:28:25.865 "trsvcid": "4420", 00:28:25.865 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:25.865 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:25.865 "hdgst": false, 00:28:25.865 "ddgst": false 00:28:25.865 }, 00:28:25.865 "method": "bdev_nvme_attach_controller" 00:28:25.865 }' 00:28:25.865 [2024-12-16 05:56:59.504390] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.865 [2024-12-16 05:56:59.543115] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.763 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:27.763 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:28:27.763 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:27.763 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.763 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:27.763 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.763 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3467204 00:28:27.763 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:27.763 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:28.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3467204 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3467087 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:28.697 { 00:28:28.697 "params": { 00:28:28.697 "name": "Nvme$subsystem", 00:28:28.697 "trtype": "$TEST_TRANSPORT", 00:28:28.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.697 "adrfam": "ipv4", 00:28:28.697 "trsvcid": "$NVMF_PORT", 00:28:28.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.697 "hdgst": ${hdgst:-false}, 00:28:28.697 "ddgst": ${ddgst:-false} 00:28:28.697 }, 00:28:28.697 "method": "bdev_nvme_attach_controller" 00:28:28.697 } 00:28:28.697 EOF 00:28:28.697 )") 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:28.697 { 00:28:28.697 "params": { 00:28:28.697 "name": "Nvme$subsystem", 00:28:28.697 "trtype": "$TEST_TRANSPORT", 00:28:28.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.697 "adrfam": "ipv4", 00:28:28.697 "trsvcid": "$NVMF_PORT", 00:28:28.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.697 "hdgst": ${hdgst:-false}, 00:28:28.697 "ddgst": ${ddgst:-false} 00:28:28.697 }, 00:28:28.697 "method": "bdev_nvme_attach_controller" 00:28:28.697 } 00:28:28.697 EOF 00:28:28.697 )") 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:28.697 { 00:28:28.697 "params": { 00:28:28.697 "name": "Nvme$subsystem", 00:28:28.697 "trtype": "$TEST_TRANSPORT", 00:28:28.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.697 "adrfam": "ipv4", 00:28:28.697 "trsvcid": "$NVMF_PORT", 00:28:28.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.697 "hdgst": ${hdgst:-false}, 00:28:28.697 "ddgst": ${ddgst:-false} 00:28:28.697 }, 00:28:28.697 "method": "bdev_nvme_attach_controller" 00:28:28.697 } 00:28:28.697 EOF 00:28:28.697 )") 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:28.697 { 00:28:28.697 "params": { 00:28:28.697 "name": "Nvme$subsystem", 00:28:28.697 "trtype": "$TEST_TRANSPORT", 00:28:28.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.697 "adrfam": "ipv4", 00:28:28.697 "trsvcid": "$NVMF_PORT", 00:28:28.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.697 "hdgst": ${hdgst:-false}, 00:28:28.697 "ddgst": ${ddgst:-false} 00:28:28.697 }, 00:28:28.697 "method": "bdev_nvme_attach_controller" 00:28:28.697 } 00:28:28.697 EOF 00:28:28.697 )") 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:28.697 { 00:28:28.697 "params": { 00:28:28.697 "name": "Nvme$subsystem", 00:28:28.697 "trtype": "$TEST_TRANSPORT", 00:28:28.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.697 "adrfam": "ipv4", 00:28:28.697 "trsvcid": "$NVMF_PORT", 00:28:28.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.697 "hdgst": ${hdgst:-false}, 00:28:28.697 "ddgst": ${ddgst:-false} 00:28:28.697 }, 00:28:28.697 "method": "bdev_nvme_attach_controller" 00:28:28.697 } 00:28:28.697 EOF 00:28:28.697 )") 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:28.697 { 00:28:28.697 "params": { 00:28:28.697 "name": "Nvme$subsystem", 00:28:28.697 "trtype": "$TEST_TRANSPORT", 00:28:28.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.697 "adrfam": "ipv4", 00:28:28.697 "trsvcid": "$NVMF_PORT", 00:28:28.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.697 "hdgst": ${hdgst:-false}, 00:28:28.697 "ddgst": ${ddgst:-false} 00:28:28.697 }, 00:28:28.697 "method": "bdev_nvme_attach_controller" 00:28:28.697 } 00:28:28.697 EOF 00:28:28.697 )") 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:28.697 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:28.697 { 00:28:28.697 "params": { 00:28:28.697 "name": "Nvme$subsystem", 00:28:28.697 "trtype": "$TEST_TRANSPORT", 00:28:28.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.697 "adrfam": "ipv4", 00:28:28.697 "trsvcid": "$NVMF_PORT", 00:28:28.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.697 "hdgst": ${hdgst:-false}, 00:28:28.697 "ddgst": ${ddgst:-false} 00:28:28.697 }, 00:28:28.697 "method": "bdev_nvme_attach_controller" 00:28:28.697 } 00:28:28.697 EOF 00:28:28.697 )") 00:28:28.698 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:28.698 [2024-12-16 
05:57:02.391536] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:28.698 [2024-12-16 05:57:02.391581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3467804 ] 00:28:28.698 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:28.698 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:28.698 { 00:28:28.698 "params": { 00:28:28.698 "name": "Nvme$subsystem", 00:28:28.698 "trtype": "$TEST_TRANSPORT", 00:28:28.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.698 "adrfam": "ipv4", 00:28:28.698 "trsvcid": "$NVMF_PORT", 00:28:28.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.698 "hdgst": ${hdgst:-false}, 00:28:28.698 "ddgst": ${ddgst:-false} 00:28:28.698 }, 00:28:28.698 "method": "bdev_nvme_attach_controller" 00:28:28.698 } 00:28:28.698 EOF 00:28:28.698 )") 00:28:28.698 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:28.698 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:28.698 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:28.698 { 00:28:28.698 "params": { 00:28:28.698 "name": "Nvme$subsystem", 00:28:28.698 "trtype": "$TEST_TRANSPORT", 00:28:28.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.698 "adrfam": "ipv4", 00:28:28.698 "trsvcid": "$NVMF_PORT", 00:28:28.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.698 "hdgst": ${hdgst:-false}, 00:28:28.698 "ddgst": ${ddgst:-false} 00:28:28.698 }, 00:28:28.698 "method": "bdev_nvme_attach_controller" 00:28:28.698 } 00:28:28.698 EOF 00:28:28.698 )") 00:28:28.698 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:28.698 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:28.698 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:28.698 { 00:28:28.698 "params": { 00:28:28.698 "name": "Nvme$subsystem", 00:28:28.698 "trtype": "$TEST_TRANSPORT", 00:28:28.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.698 "adrfam": "ipv4", 00:28:28.698 "trsvcid": "$NVMF_PORT", 00:28:28.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.698 "hdgst": ${hdgst:-false}, 00:28:28.698 "ddgst": ${ddgst:-false} 00:28:28.698 }, 00:28:28.698 "method": "bdev_nvme_attach_controller" 00:28:28.698 } 00:28:28.698 EOF 00:28:28.698 )") 00:28:28.698 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:28.698 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 
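
This second expansion of gen_nvmf_target_json feeds the bdevperf run launched at shutdown.sh@92, after the first client was killed with SIGKILL at @84 and the target was confirmed still alive with kill -0 at @89. Stripped of the trace prefixes, and with the workspace path shortened, the I/O run is:

    # The bdevperf invocation traced above; flag meanings as I read them for SPDK's bdevperf:
    #   -q 64     queue depth
    #   -o 65536  I/O size in bytes (64 KiB)
    #   -w verify data-integrity (write, read back, compare) workload
    #   -t 1      run time in seconds
    build/examples/bdevperf --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 1
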
00:28:28.698 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:28:28.698 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:28:28.698 "params": { 00:28:28.698 "name": "Nvme1", 00:28:28.698 "trtype": "tcp", 00:28:28.698 "traddr": "10.0.0.2", 00:28:28.698 "adrfam": "ipv4", 00:28:28.698 "trsvcid": "4420", 00:28:28.698 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:28.698 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:28.698 "hdgst": false, 00:28:28.698 "ddgst": false 00:28:28.698 }, 00:28:28.698 "method": "bdev_nvme_attach_controller" 00:28:28.698 },{ 00:28:28.698 "params": { 00:28:28.698 "name": "Nvme2", 00:28:28.698 "trtype": "tcp", 00:28:28.698 "traddr": "10.0.0.2", 00:28:28.698 "adrfam": "ipv4", 00:28:28.698 "trsvcid": "4420", 00:28:28.698 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:28.698 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:28.698 "hdgst": false, 00:28:28.698 "ddgst": false 00:28:28.698 }, 00:28:28.698 "method": "bdev_nvme_attach_controller" 00:28:28.698 },{ 00:28:28.698 "params": { 00:28:28.698 "name": "Nvme3", 00:28:28.698 "trtype": "tcp", 00:28:28.698 "traddr": "10.0.0.2", 00:28:28.698 "adrfam": "ipv4", 00:28:28.698 "trsvcid": "4420", 00:28:28.698 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:28.698 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:28.698 "hdgst": false, 00:28:28.698 "ddgst": false 00:28:28.698 }, 00:28:28.698 "method": "bdev_nvme_attach_controller" 00:28:28.698 },{ 00:28:28.698 "params": { 00:28:28.698 "name": "Nvme4", 00:28:28.698 "trtype": "tcp", 00:28:28.698 "traddr": "10.0.0.2", 00:28:28.698 "adrfam": "ipv4", 00:28:28.698 "trsvcid": "4420", 00:28:28.698 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:28.698 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:28.698 "hdgst": false, 00:28:28.698 "ddgst": false 00:28:28.698 }, 00:28:28.698 "method": "bdev_nvme_attach_controller" 00:28:28.698 },{ 00:28:28.698 "params": { 00:28:28.698 "name": "Nvme5", 00:28:28.698 "trtype": "tcp", 00:28:28.698 "traddr": "10.0.0.2", 00:28:28.698 "adrfam": "ipv4", 00:28:28.698 "trsvcid": "4420", 00:28:28.698 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:28.698 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:28.698 "hdgst": false, 00:28:28.698 "ddgst": false 00:28:28.698 }, 00:28:28.698 "method": "bdev_nvme_attach_controller" 00:28:28.698 },{ 00:28:28.698 "params": { 00:28:28.698 "name": "Nvme6", 00:28:28.698 "trtype": "tcp", 00:28:28.698 "traddr": "10.0.0.2", 00:28:28.698 "adrfam": "ipv4", 00:28:28.698 "trsvcid": "4420", 00:28:28.698 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:28.698 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:28.698 "hdgst": false, 00:28:28.698 "ddgst": false 00:28:28.698 }, 00:28:28.698 "method": "bdev_nvme_attach_controller" 00:28:28.698 },{ 00:28:28.698 "params": { 00:28:28.698 "name": "Nvme7", 00:28:28.698 "trtype": "tcp", 00:28:28.698 "traddr": "10.0.0.2", 00:28:28.698 "adrfam": "ipv4", 00:28:28.698 "trsvcid": "4420", 00:28:28.698 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:28.698 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:28.698 "hdgst": false, 00:28:28.698 "ddgst": false 00:28:28.698 }, 00:28:28.698 "method": "bdev_nvme_attach_controller" 00:28:28.698 },{ 00:28:28.698 "params": { 00:28:28.698 "name": "Nvme8", 00:28:28.698 "trtype": "tcp", 00:28:28.698 "traddr": "10.0.0.2", 00:28:28.698 "adrfam": "ipv4", 00:28:28.698 "trsvcid": "4420", 00:28:28.698 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:28.698 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:28.698 "hdgst": false, 00:28:28.698 "ddgst": false 00:28:28.698 }, 00:28:28.698 "method": "bdev_nvme_attach_controller" 00:28:28.698 },{ 00:28:28.698 "params": { 00:28:28.698 "name": "Nvme9", 00:28:28.698 "trtype": "tcp", 00:28:28.698 "traddr": "10.0.0.2", 00:28:28.698 "adrfam": "ipv4", 00:28:28.698 "trsvcid": "4420", 00:28:28.698 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:28.698 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:28.698 "hdgst": false, 00:28:28.698 "ddgst": false 00:28:28.698 }, 00:28:28.698 "method": "bdev_nvme_attach_controller" 00:28:28.698 },{ 00:28:28.698 "params": { 00:28:28.698 "name": "Nvme10", 00:28:28.698 "trtype": "tcp", 00:28:28.698 "traddr": "10.0.0.2", 00:28:28.698 "adrfam": "ipv4", 00:28:28.698 "trsvcid": "4420", 00:28:28.698 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:28.698 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:28.698 "hdgst": false, 00:28:28.698 "ddgst": false 00:28:28.698 }, 00:28:28.698 "method": "bdev_nvme_attach_controller" 00:28:28.698 }' 00:28:28.698 [2024-12-16 05:57:02.449075] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.698 [2024-12-16 05:57:02.488181] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.594 Running I/O for 1 seconds... 00:28:31.415 2244.00 IOPS, 140.25 MiB/s 00:28:31.415 Latency(us) 00:28:31.415 [2024-12-16T04:57:05.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.415 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:31.415 Verification LBA range: start 0x0 length 0x400 00:28:31.415 Nvme1n1 : 1.14 281.44 17.59 0.00 0.00 225474.85 16852.11 213709.78 00:28:31.415 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:31.415 Verification LBA range: start 0x0 length 0x400 00:28:31.415 Nvme2n1 : 1.07 240.30 15.02 0.00 0.00 260153.54 17850.76 225693.50 00:28:31.415 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:31.415 Verification LBA range: start 0x0 length 0x400 00:28:31.415 Nvme3n1 : 1.12 289.95 18.12 0.00 0.00 211235.32 5586.16 213709.78 00:28:31.415 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:31.415 Verification LBA range: start 0x0 length 0x400 00:28:31.415 Nvme4n1 : 1.13 282.04 17.63 0.00 0.00 215668.83 27462.70 206719.27 00:28:31.415 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:31.415 Verification LBA range: start 0x0 length 0x400 00:28:31.415 Nvme5n1 : 1.15 279.16 17.45 0.00 0.00 214917.12 17601.10 213709.78 00:28:31.415 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:31.415 Verification LBA range: start 0x0 length 0x400 00:28:31.415 Nvme6n1 : 1.14 280.26 17.52 0.00 0.00 210880.80 16976.94 231685.36 00:28:31.415 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:31.415 Verification LBA range: start 0x0 length 0x400 00:28:31.415 Nvme7n1 : 1.13 283.65 17.73 0.00 0.00 205020.60 22469.49 208716.56 00:28:31.415 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:31.415 Verification LBA range: start 0x0 length 0x400 00:28:31.415 Nvme8n1 : 1.12 285.50 17.84 0.00 0.00 200182.44 13419.28 211712.49 00:28:31.415 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:31.415 Verification LBA range: start 0x0 length 0x400 00:28:31.416 Nvme9n1 : 1.15 278.32 17.39 0.00 0.00 203001.66 18225.25 224694.86 00:28:31.416 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:28:31.416 Verification LBA range: start 0x0 length 0x400 00:28:31.416 Nvme10n1 : 1.15 278.77 17.42 0.00 0.00 199580.23 12483.05 232684.01 00:28:31.416 [2024-12-16T04:57:05.272Z] =================================================================================================================== 00:28:31.416 [2024-12-16T04:57:05.272Z] Total : 2779.40 173.71 0.00 0.00 213678.99 5586.16 232684.01 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:31.673 rmmod nvme_tcp 00:28:31.673 rmmod nvme_fabrics 00:28:31.673 rmmod nvme_keyring 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@513 -- # '[' -n 3467087 ']' 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # killprocess 3467087 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 3467087 ']' 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 3467087 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3467087 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:31.673 05:57:05 
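
With 64 KiB I/Os the bandwidth column in the summary table above is simply IOPS/16, so the Nvme1n1 row (281.44 IOPS, 17.59 MiB/s) and the Total row (2779.40 IOPS, 173.71 MiB/s) are consistent. A one-liner to cross-check the total:

    # MiB/s = IOPS * io_size / 2^20 for the 65536-byte I/O size used in this run.
    awk -v iops=2779.40 -v io=65536 'BEGIN { printf "%.2f MiB/s\n", iops * io / 1048576 }'
    # -> 173.71 MiB/s, matching the Total row above
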
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3467087' 00:28:31.673 killing process with pid 3467087 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 3467087 00:28:31.673 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 3467087 00:28:32.239 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:32.239 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:32.239 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:32.239 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:32.239 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-save 00:28:32.239 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:32.239 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-restore 00:28:32.239 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:32.239 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:32.239 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.239 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:32.239 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:34.142 00:28:34.142 real 0m14.816s 00:28:34.142 user 0m34.023s 00:28:34.142 sys 0m5.446s 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:34.142 ************************************ 00:28:34.142 END TEST nvmf_shutdown_tc1 00:28:34.142 ************************************ 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:34.142 ************************************ 00:28:34.142 START TEST nvmf_shutdown_tc2 00:28:34.142 ************************************ 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # 
nvmf_shutdown_tc2 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:34.142 05:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:34.142 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.142 05:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:34.142 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:34.142 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:34.143 Found net devices under 0000:af:00.0: cvl_0_0 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:34.143 05:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:34.143 Found net devices under 0000:af:00.1: cvl_0_1 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # is_hw=yes 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:34.143 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:34.402 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:34.402 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:34.402 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:34.402 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:34.402 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:34.402 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:34.402 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:34.402 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:34.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:34.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:28:34.402 00:28:34.402 --- 10.0.0.2 ping statistics --- 00:28:34.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.402 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:28:34.402 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:34.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:34.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:28:34.402 00:28:34.402 --- 10.0.0.1 ping statistics --- 00:28:34.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.402 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:28:34.402 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:34.402 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # return 0 00:28:34.402 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:34.402 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:34.402 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:34.402 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:34.402 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:34.402 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:34.402 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:34.660 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:34.661 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:34.661 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:34.661 05:57:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.661 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:34.661 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # nvmfpid=3469242 00:28:34.661 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # waitforlisten 3469242 00:28:34.661 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3469242 ']' 00:28:34.661 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.661 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:34.661 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.661 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:34.661 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.661 [2024-12-16 05:57:08.335713] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:34.661 [2024-12-16 05:57:08.335753] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.661 [2024-12-16 05:57:08.395007] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:34.661 [2024-12-16 05:57:08.435113] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.661 [2024-12-16 05:57:08.435151] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:34.661 [2024-12-16 05:57:08.435161] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.661 [2024-12-16 05:57:08.435168] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.661 [2024-12-16 05:57:08.435175] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
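(Editor's aside, not part of the captured output.) The nvmf_tgt process whose EAL and app_setup_trace notices appear above is started inside the cvl_0_0_ns_spdk network namespace that nvmftestinit set up a few records earlier. Reduced to plain commands, the bring-up traced in this log looks roughly like the sketch below: interface names, addresses, the 4420 port and the 0x1E core mask are the values visible in the log, the single "ip netns exec" prefix simplifies the doubled prefix the helper prints, and the block is a simplified reading of nvmf/common.sh rather than a copy of it.

  # move the target-side port into its own namespace and address both sides
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP listener port and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # start the target in the namespace with the core mask used by nvmf_shutdown_tc2
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
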
00:28:34.661 [2024-12-16 05:57:08.435279] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:34.661 [2024-12-16 05:57:08.435367] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:34.661 [2024-12-16 05:57:08.435475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.661 [2024-12-16 05:57:08.435476] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.919 [2024-12-16 05:57:08.576582] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.919 05:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.919 Malloc1 00:28:34.919 [2024-12-16 05:57:08.671809] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:34.919 Malloc2 00:28:34.919 Malloc3 00:28:35.177 Malloc4 00:28:35.177 Malloc5 00:28:35.177 Malloc6 00:28:35.177 Malloc7 00:28:35.177 Malloc8 00:28:35.177 Malloc9 00:28:35.436 Malloc10 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3469467 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3469467 /var/tmp/bdevperf.sock 00:28:35.436 05:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3469467 ']' 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # config=() 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:35.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # local subsystem config 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:35.436 { 00:28:35.436 "params": { 00:28:35.436 "name": "Nvme$subsystem", 00:28:35.436 "trtype": "$TEST_TRANSPORT", 00:28:35.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.436 "adrfam": "ipv4", 00:28:35.436 "trsvcid": "$NVMF_PORT", 00:28:35.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.436 "hdgst": ${hdgst:-false}, 00:28:35.436 "ddgst": ${ddgst:-false} 00:28:35.436 }, 00:28:35.436 "method": "bdev_nvme_attach_controller" 00:28:35.436 } 00:28:35.436 EOF 00:28:35.436 )") 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:35.436 { 00:28:35.436 "params": { 00:28:35.436 "name": "Nvme$subsystem", 00:28:35.436 "trtype": "$TEST_TRANSPORT", 00:28:35.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.436 "adrfam": "ipv4", 00:28:35.436 "trsvcid": "$NVMF_PORT", 00:28:35.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.436 "hdgst": ${hdgst:-false}, 00:28:35.436 "ddgst": ${ddgst:-false} 00:28:35.436 }, 00:28:35.436 "method": "bdev_nvme_attach_controller" 00:28:35.436 } 00:28:35.436 EOF 00:28:35.436 )") 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:35.436 { 
00:28:35.436 "params": { 00:28:35.436 "name": "Nvme$subsystem", 00:28:35.436 "trtype": "$TEST_TRANSPORT", 00:28:35.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.436 "adrfam": "ipv4", 00:28:35.436 "trsvcid": "$NVMF_PORT", 00:28:35.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.436 "hdgst": ${hdgst:-false}, 00:28:35.436 "ddgst": ${ddgst:-false} 00:28:35.436 }, 00:28:35.436 "method": "bdev_nvme_attach_controller" 00:28:35.436 } 00:28:35.436 EOF 00:28:35.436 )") 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:35.436 { 00:28:35.436 "params": { 00:28:35.436 "name": "Nvme$subsystem", 00:28:35.436 "trtype": "$TEST_TRANSPORT", 00:28:35.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.436 "adrfam": "ipv4", 00:28:35.436 "trsvcid": "$NVMF_PORT", 00:28:35.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.436 "hdgst": ${hdgst:-false}, 00:28:35.436 "ddgst": ${ddgst:-false} 00:28:35.436 }, 00:28:35.436 "method": "bdev_nvme_attach_controller" 00:28:35.436 } 00:28:35.436 EOF 00:28:35.436 )") 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:35.436 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:35.436 { 00:28:35.436 "params": { 00:28:35.436 "name": "Nvme$subsystem", 00:28:35.436 "trtype": "$TEST_TRANSPORT", 00:28:35.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.437 "adrfam": "ipv4", 00:28:35.437 "trsvcid": "$NVMF_PORT", 00:28:35.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.437 "hdgst": ${hdgst:-false}, 00:28:35.437 "ddgst": ${ddgst:-false} 00:28:35.437 }, 00:28:35.437 "method": "bdev_nvme_attach_controller" 00:28:35.437 } 00:28:35.437 EOF 00:28:35.437 )") 00:28:35.437 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:28:35.437 [2024-12-16 05:57:09.138906] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:35.437 [2024-12-16 05:57:09.138953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3469467 ] 00:28:35.437 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:35.437 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:35.437 { 00:28:35.437 "params": { 00:28:35.437 "name": "Nvme$subsystem", 00:28:35.437 "trtype": "$TEST_TRANSPORT", 00:28:35.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.437 "adrfam": "ipv4", 00:28:35.437 "trsvcid": "$NVMF_PORT", 00:28:35.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.437 "hdgst": ${hdgst:-false}, 00:28:35.437 "ddgst": ${ddgst:-false} 00:28:35.437 }, 00:28:35.437 "method": "bdev_nvme_attach_controller" 00:28:35.437 } 00:28:35.437 EOF 00:28:35.437 )") 00:28:35.437 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:28:35.437 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:35.437 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:35.437 { 00:28:35.437 "params": { 00:28:35.437 "name": "Nvme$subsystem", 00:28:35.437 "trtype": "$TEST_TRANSPORT", 00:28:35.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.437 "adrfam": "ipv4", 00:28:35.437 "trsvcid": "$NVMF_PORT", 00:28:35.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.437 "hdgst": ${hdgst:-false}, 00:28:35.437 "ddgst": ${ddgst:-false} 00:28:35.437 }, 00:28:35.437 "method": "bdev_nvme_attach_controller" 00:28:35.437 } 00:28:35.437 EOF 00:28:35.437 )") 00:28:35.437 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:28:35.437 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:35.437 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:35.437 { 00:28:35.437 "params": { 00:28:35.437 "name": "Nvme$subsystem", 00:28:35.437 "trtype": "$TEST_TRANSPORT", 00:28:35.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.437 "adrfam": "ipv4", 00:28:35.437 "trsvcid": "$NVMF_PORT", 00:28:35.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.437 "hdgst": ${hdgst:-false}, 00:28:35.437 "ddgst": ${ddgst:-false} 00:28:35.437 }, 00:28:35.437 "method": "bdev_nvme_attach_controller" 00:28:35.437 } 00:28:35.437 EOF 00:28:35.437 )") 00:28:35.437 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:28:35.437 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:35.437 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:35.437 { 00:28:35.437 "params": { 00:28:35.437 "name": "Nvme$subsystem", 00:28:35.437 "trtype": "$TEST_TRANSPORT", 00:28:35.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.437 
"adrfam": "ipv4", 00:28:35.437 "trsvcid": "$NVMF_PORT", 00:28:35.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.437 "hdgst": ${hdgst:-false}, 00:28:35.437 "ddgst": ${ddgst:-false} 00:28:35.437 }, 00:28:35.437 "method": "bdev_nvme_attach_controller" 00:28:35.437 } 00:28:35.437 EOF 00:28:35.437 )") 00:28:35.437 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:28:35.437 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:35.437 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:35.437 { 00:28:35.437 "params": { 00:28:35.437 "name": "Nvme$subsystem", 00:28:35.437 "trtype": "$TEST_TRANSPORT", 00:28:35.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.437 "adrfam": "ipv4", 00:28:35.437 "trsvcid": "$NVMF_PORT", 00:28:35.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.437 "hdgst": ${hdgst:-false}, 00:28:35.437 "ddgst": ${ddgst:-false} 00:28:35.437 }, 00:28:35.437 "method": "bdev_nvme_attach_controller" 00:28:35.437 } 00:28:35.437 EOF 00:28:35.437 )") 00:28:35.437 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:28:35.437 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # jq . 00:28:35.437 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@581 -- # IFS=, 00:28:35.437 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:28:35.437 "params": { 00:28:35.437 "name": "Nvme1", 00:28:35.437 "trtype": "tcp", 00:28:35.437 "traddr": "10.0.0.2", 00:28:35.437 "adrfam": "ipv4", 00:28:35.437 "trsvcid": "4420", 00:28:35.437 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:35.437 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:35.437 "hdgst": false, 00:28:35.437 "ddgst": false 00:28:35.437 }, 00:28:35.437 "method": "bdev_nvme_attach_controller" 00:28:35.437 },{ 00:28:35.437 "params": { 00:28:35.437 "name": "Nvme2", 00:28:35.437 "trtype": "tcp", 00:28:35.437 "traddr": "10.0.0.2", 00:28:35.437 "adrfam": "ipv4", 00:28:35.437 "trsvcid": "4420", 00:28:35.437 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:35.437 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:35.437 "hdgst": false, 00:28:35.437 "ddgst": false 00:28:35.437 }, 00:28:35.437 "method": "bdev_nvme_attach_controller" 00:28:35.437 },{ 00:28:35.437 "params": { 00:28:35.437 "name": "Nvme3", 00:28:35.437 "trtype": "tcp", 00:28:35.437 "traddr": "10.0.0.2", 00:28:35.437 "adrfam": "ipv4", 00:28:35.437 "trsvcid": "4420", 00:28:35.437 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:35.437 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:35.437 "hdgst": false, 00:28:35.437 "ddgst": false 00:28:35.437 }, 00:28:35.437 "method": "bdev_nvme_attach_controller" 00:28:35.437 },{ 00:28:35.437 "params": { 00:28:35.437 "name": "Nvme4", 00:28:35.437 "trtype": "tcp", 00:28:35.437 "traddr": "10.0.0.2", 00:28:35.437 "adrfam": "ipv4", 00:28:35.437 "trsvcid": "4420", 00:28:35.437 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:35.437 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:35.437 "hdgst": false, 00:28:35.437 "ddgst": false 00:28:35.437 }, 00:28:35.437 "method": "bdev_nvme_attach_controller" 00:28:35.437 },{ 00:28:35.437 "params": { 00:28:35.437 "name": 
"Nvme5", 00:28:35.437 "trtype": "tcp", 00:28:35.437 "traddr": "10.0.0.2", 00:28:35.437 "adrfam": "ipv4", 00:28:35.437 "trsvcid": "4420", 00:28:35.437 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:35.437 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:35.437 "hdgst": false, 00:28:35.437 "ddgst": false 00:28:35.437 }, 00:28:35.437 "method": "bdev_nvme_attach_controller" 00:28:35.437 },{ 00:28:35.437 "params": { 00:28:35.437 "name": "Nvme6", 00:28:35.437 "trtype": "tcp", 00:28:35.437 "traddr": "10.0.0.2", 00:28:35.437 "adrfam": "ipv4", 00:28:35.437 "trsvcid": "4420", 00:28:35.437 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:35.437 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:35.437 "hdgst": false, 00:28:35.437 "ddgst": false 00:28:35.437 }, 00:28:35.437 "method": "bdev_nvme_attach_controller" 00:28:35.437 },{ 00:28:35.437 "params": { 00:28:35.437 "name": "Nvme7", 00:28:35.437 "trtype": "tcp", 00:28:35.437 "traddr": "10.0.0.2", 00:28:35.437 "adrfam": "ipv4", 00:28:35.437 "trsvcid": "4420", 00:28:35.437 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:35.437 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:35.437 "hdgst": false, 00:28:35.437 "ddgst": false 00:28:35.437 }, 00:28:35.437 "method": "bdev_nvme_attach_controller" 00:28:35.437 },{ 00:28:35.437 "params": { 00:28:35.437 "name": "Nvme8", 00:28:35.437 "trtype": "tcp", 00:28:35.437 "traddr": "10.0.0.2", 00:28:35.437 "adrfam": "ipv4", 00:28:35.437 "trsvcid": "4420", 00:28:35.437 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:35.437 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:35.437 "hdgst": false, 00:28:35.437 "ddgst": false 00:28:35.437 }, 00:28:35.437 "method": "bdev_nvme_attach_controller" 00:28:35.437 },{ 00:28:35.437 "params": { 00:28:35.437 "name": "Nvme9", 00:28:35.437 "trtype": "tcp", 00:28:35.437 "traddr": "10.0.0.2", 00:28:35.437 "adrfam": "ipv4", 00:28:35.437 "trsvcid": "4420", 00:28:35.437 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:35.438 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:35.438 "hdgst": false, 00:28:35.438 "ddgst": false 00:28:35.438 }, 00:28:35.438 "method": "bdev_nvme_attach_controller" 00:28:35.438 },{ 00:28:35.438 "params": { 00:28:35.438 "name": "Nvme10", 00:28:35.438 "trtype": "tcp", 00:28:35.438 "traddr": "10.0.0.2", 00:28:35.438 "adrfam": "ipv4", 00:28:35.438 "trsvcid": "4420", 00:28:35.438 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:35.438 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:35.438 "hdgst": false, 00:28:35.438 "ddgst": false 00:28:35.438 }, 00:28:35.438 "method": "bdev_nvme_attach_controller" 00:28:35.438 }' 00:28:35.438 [2024-12-16 05:57:09.196526] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.438 [2024-12-16 05:57:09.235256] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.807 Running I/O for 10 seconds... 
00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=136 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3469467 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3469467 ']' 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3469467 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@955 -- # uname 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3469467 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3469467' 00:28:37.373 killing process with pid 3469467 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3469467 00:28:37.373 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3469467 00:28:37.373 Received shutdown signal, test time was about 0.633025 seconds 00:28:37.373 00:28:37.373 Latency(us) 00:28:37.373 [2024-12-16T04:57:11.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.374 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:37.374 Verification LBA range: start 0x0 length 0x400 00:28:37.374 Nvme1n1 : 0.61 321.98 20.12 0.00 0.00 194796.18 3370.42 208716.56 00:28:37.374 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:37.374 Verification LBA range: start 0x0 length 0x400 00:28:37.374 Nvme2n1 : 0.63 305.81 19.11 0.00 0.00 200896.77 17351.44 176759.95 00:28:37.374 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:37.374 Verification LBA range: start 0x0 length 0x400 00:28:37.374 Nvme3n1 : 0.62 311.66 19.48 0.00 0.00 191851.60 13107.20 215707.06 00:28:37.374 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:37.374 Verification LBA range: start 0x0 length 0x400 00:28:37.374 Nvme4n1 : 0.62 308.62 19.29 0.00 0.00 188978.87 12982.37 213709.78 00:28:37.374 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:37.374 Verification LBA range: start 0x0 length 0x400 00:28:37.374 Nvme5n1 : 0.63 304.76 19.05 0.00 0.00 186303.31 17101.78 195734.19 00:28:37.374 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:37.374 Verification LBA range: start 0x0 length 0x400 00:28:37.374 Nvme6n1 : 0.60 214.43 13.40 0.00 0.00 255126.67 17101.78 211712.49 00:28:37.374 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:37.374 Verification LBA range: start 0x0 length 0x400 00:28:37.374 Nvme7n1 : 0.62 307.21 19.20 0.00 0.00 174529.91 36700.16 161780.30 00:28:37.374 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:37.374 Verification LBA range: start 0x0 length 0x400 00:28:37.374 Nvme8n1 : 0.63 298.87 18.68 0.00 0.00 173707.88 12483.05 216705.71 00:28:37.374 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:37.374 Verification LBA range: start 0x0 length 0x400 00:28:37.374 Nvme9n1 : 0.60 211.92 13.25 0.00 0.00 235557.06 36200.84 212711.13 00:28:37.374 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:37.374 Verification LBA range: start 0x0 length 0x400 00:28:37.374 Nvme10n1 : 0.61 210.78 13.17 0.00 
0.00 229981.14 21221.18 228689.43 00:28:37.374 [2024-12-16T04:57:11.230Z] =================================================================================================================== 00:28:37.374 [2024-12-16T04:57:11.230Z] Total : 2796.05 174.75 0.00 0.00 199088.06 3370.42 228689.43 00:28:37.631 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3469242 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:39.004 rmmod nvme_tcp 00:28:39.004 rmmod nvme_fabrics 00:28:39.004 rmmod nvme_keyring 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@513 -- # '[' -n 3469242 ']' 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # killprocess 3469242 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3469242 ']' 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3469242 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3469242 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3469242' 00:28:39.004 killing process with pid 3469242 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3469242 00:28:39.004 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3469242 00:28:39.263 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:39.263 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:39.263 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:39.263 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:39.263 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-save 00:28:39.263 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:39.263 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-restore 00:28:39.263 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:39.263 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:39.263 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.263 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.263 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.164 05:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:41.164 00:28:41.164 real 0m7.058s 00:28:41.164 user 0m20.077s 00:28:41.164 sys 0m1.248s 00:28:41.164 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:41.164 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:41.164 ************************************ 00:28:41.164 END TEST nvmf_shutdown_tc2 00:28:41.164 ************************************ 00:28:41.425 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@171 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:41.425 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:41.426 ************************************ 00:28:41.426 START TEST nvmf_shutdown_tc3 00:28:41.426 ************************************ 00:28:41.426 05:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@321 -- # local -ga x722 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:41.426 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:41.426 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:41.426 Found net devices under 0000:af:00.0: cvl_0_0 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:41.426 05:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:41.426 Found net devices under 0000:af:00.1: cvl_0_1 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # is_hw=yes 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:41.426 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:41.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:41.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:28:41.685 00:28:41.685 --- 10.0.0.2 ping statistics --- 00:28:41.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.685 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:41.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:28:41.685 00:28:41.685 --- 10.0.0.1 ping statistics --- 00:28:41.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.685 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # return 0 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:41.685 05:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # nvmfpid=3470529 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # waitforlisten 3470529 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3470529 ']' 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:41.685 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:41.685 [2024-12-16 05:57:15.379583] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:41.685 [2024-12-16 05:57:15.379628] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.686 [2024-12-16 05:57:15.439419] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:41.686 [2024-12-16 05:57:15.479598] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.686 [2024-12-16 05:57:15.479637] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.686 [2024-12-16 05:57:15.479647] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:41.686 [2024-12-16 05:57:15.479655] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:41.686 [2024-12-16 05:57:15.479661] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:41.686 [2024-12-16 05:57:15.479771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:41.686 [2024-12-16 05:57:15.479870] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:41.686 [2024-12-16 05:57:15.479978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:28:41.686 [2024-12-16 05:57:15.479978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:41.944 [2024-12-16 05:57:15.626464] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.944 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:41.944 Malloc1 00:28:41.944 [2024-12-16 05:57:15.726092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.944 Malloc2 00:28:41.944 Malloc3 00:28:42.257 Malloc4 00:28:42.257 Malloc5 00:28:42.257 Malloc6 00:28:42.257 Malloc7 00:28:42.257 Malloc8 00:28:42.257 Malloc9 00:28:42.539 Malloc10 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3470771 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3470771 /var/tmp/bdevperf.sock 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3470771 ']' 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:42.539 05:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:42.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # config=() 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # local subsystem config 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:42.539 { 00:28:42.539 "params": { 00:28:42.539 "name": "Nvme$subsystem", 00:28:42.539 "trtype": "$TEST_TRANSPORT", 00:28:42.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "$NVMF_PORT", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.539 "hdgst": ${hdgst:-false}, 00:28:42.539 "ddgst": ${ddgst:-false} 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 00:28:42.539 } 00:28:42.539 EOF 00:28:42.539 )") 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:42.539 { 00:28:42.539 "params": { 00:28:42.539 "name": "Nvme$subsystem", 00:28:42.539 "trtype": "$TEST_TRANSPORT", 00:28:42.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "$NVMF_PORT", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.539 "hdgst": ${hdgst:-false}, 00:28:42.539 "ddgst": ${ddgst:-false} 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 00:28:42.539 } 00:28:42.539 EOF 00:28:42.539 )") 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:42.539 { 00:28:42.539 "params": { 00:28:42.539 
"name": "Nvme$subsystem", 00:28:42.539 "trtype": "$TEST_TRANSPORT", 00:28:42.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "$NVMF_PORT", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.539 "hdgst": ${hdgst:-false}, 00:28:42.539 "ddgst": ${ddgst:-false} 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 00:28:42.539 } 00:28:42.539 EOF 00:28:42.539 )") 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:42.539 { 00:28:42.539 "params": { 00:28:42.539 "name": "Nvme$subsystem", 00:28:42.539 "trtype": "$TEST_TRANSPORT", 00:28:42.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "$NVMF_PORT", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.539 "hdgst": ${hdgst:-false}, 00:28:42.539 "ddgst": ${ddgst:-false} 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 00:28:42.539 } 00:28:42.539 EOF 00:28:42.539 )") 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:42.539 { 00:28:42.539 "params": { 00:28:42.539 "name": "Nvme$subsystem", 00:28:42.539 "trtype": "$TEST_TRANSPORT", 00:28:42.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "$NVMF_PORT", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.539 "hdgst": ${hdgst:-false}, 00:28:42.539 "ddgst": ${ddgst:-false} 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 00:28:42.539 } 00:28:42.539 EOF 00:28:42.539 )") 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:42.539 { 00:28:42.539 "params": { 00:28:42.539 "name": "Nvme$subsystem", 00:28:42.539 "trtype": "$TEST_TRANSPORT", 00:28:42.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "$NVMF_PORT", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.539 "hdgst": ${hdgst:-false}, 00:28:42.539 "ddgst": ${ddgst:-false} 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 00:28:42.539 } 00:28:42.539 EOF 00:28:42.539 )") 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in 
"${@:-1}" 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:42.539 { 00:28:42.539 "params": { 00:28:42.539 "name": "Nvme$subsystem", 00:28:42.539 "trtype": "$TEST_TRANSPORT", 00:28:42.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "$NVMF_PORT", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.539 "hdgst": ${hdgst:-false}, 00:28:42.539 "ddgst": ${ddgst:-false} 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 00:28:42.539 } 00:28:42.539 EOF 00:28:42.539 )") 00:28:42.539 [2024-12-16 05:57:16.199288] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:42.539 [2024-12-16 05:57:16.199336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3470771 ] 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:42.539 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:42.539 { 00:28:42.539 "params": { 00:28:42.539 "name": "Nvme$subsystem", 00:28:42.539 "trtype": "$TEST_TRANSPORT", 00:28:42.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "$NVMF_PORT", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.539 "hdgst": ${hdgst:-false}, 00:28:42.539 "ddgst": ${ddgst:-false} 00:28:42.540 }, 00:28:42.540 "method": "bdev_nvme_attach_controller" 00:28:42.540 } 00:28:42.540 EOF 00:28:42.540 )") 00:28:42.540 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:28:42.540 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:42.540 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:42.540 { 00:28:42.540 "params": { 00:28:42.540 "name": "Nvme$subsystem", 00:28:42.540 "trtype": "$TEST_TRANSPORT", 00:28:42.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.540 "adrfam": "ipv4", 00:28:42.540 "trsvcid": "$NVMF_PORT", 00:28:42.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.540 "hdgst": ${hdgst:-false}, 00:28:42.540 "ddgst": ${ddgst:-false} 00:28:42.540 }, 00:28:42.540 "method": "bdev_nvme_attach_controller" 00:28:42.540 } 00:28:42.540 EOF 00:28:42.540 )") 00:28:42.540 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:28:42.540 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:42.540 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:42.540 { 00:28:42.540 "params": { 00:28:42.540 "name": "Nvme$subsystem", 00:28:42.540 "trtype": "$TEST_TRANSPORT", 00:28:42.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.540 
"adrfam": "ipv4", 00:28:42.540 "trsvcid": "$NVMF_PORT", 00:28:42.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.540 "hdgst": ${hdgst:-false}, 00:28:42.540 "ddgst": ${ddgst:-false} 00:28:42.540 }, 00:28:42.540 "method": "bdev_nvme_attach_controller" 00:28:42.540 } 00:28:42.540 EOF 00:28:42.540 )") 00:28:42.540 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:28:42.540 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # jq . 00:28:42.540 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@581 -- # IFS=, 00:28:42.540 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:28:42.540 "params": { 00:28:42.540 "name": "Nvme1", 00:28:42.540 "trtype": "tcp", 00:28:42.540 "traddr": "10.0.0.2", 00:28:42.540 "adrfam": "ipv4", 00:28:42.540 "trsvcid": "4420", 00:28:42.540 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:42.540 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:42.540 "hdgst": false, 00:28:42.540 "ddgst": false 00:28:42.540 }, 00:28:42.540 "method": "bdev_nvme_attach_controller" 00:28:42.540 },{ 00:28:42.540 "params": { 00:28:42.540 "name": "Nvme2", 00:28:42.540 "trtype": "tcp", 00:28:42.540 "traddr": "10.0.0.2", 00:28:42.540 "adrfam": "ipv4", 00:28:42.540 "trsvcid": "4420", 00:28:42.540 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:42.540 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:42.540 "hdgst": false, 00:28:42.540 "ddgst": false 00:28:42.540 }, 00:28:42.540 "method": "bdev_nvme_attach_controller" 00:28:42.540 },{ 00:28:42.540 "params": { 00:28:42.540 "name": "Nvme3", 00:28:42.540 "trtype": "tcp", 00:28:42.540 "traddr": "10.0.0.2", 00:28:42.540 "adrfam": "ipv4", 00:28:42.540 "trsvcid": "4420", 00:28:42.540 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:42.540 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:42.540 "hdgst": false, 00:28:42.540 "ddgst": false 00:28:42.540 }, 00:28:42.540 "method": "bdev_nvme_attach_controller" 00:28:42.540 },{ 00:28:42.540 "params": { 00:28:42.540 "name": "Nvme4", 00:28:42.540 "trtype": "tcp", 00:28:42.540 "traddr": "10.0.0.2", 00:28:42.540 "adrfam": "ipv4", 00:28:42.540 "trsvcid": "4420", 00:28:42.540 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:42.540 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:42.540 "hdgst": false, 00:28:42.540 "ddgst": false 00:28:42.540 }, 00:28:42.540 "method": "bdev_nvme_attach_controller" 00:28:42.540 },{ 00:28:42.540 "params": { 00:28:42.540 "name": "Nvme5", 00:28:42.540 "trtype": "tcp", 00:28:42.540 "traddr": "10.0.0.2", 00:28:42.540 "adrfam": "ipv4", 00:28:42.540 "trsvcid": "4420", 00:28:42.540 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:42.540 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:42.540 "hdgst": false, 00:28:42.540 "ddgst": false 00:28:42.540 }, 00:28:42.540 "method": "bdev_nvme_attach_controller" 00:28:42.540 },{ 00:28:42.540 "params": { 00:28:42.540 "name": "Nvme6", 00:28:42.540 "trtype": "tcp", 00:28:42.540 "traddr": "10.0.0.2", 00:28:42.540 "adrfam": "ipv4", 00:28:42.540 "trsvcid": "4420", 00:28:42.540 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:42.540 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:42.540 "hdgst": false, 00:28:42.540 "ddgst": false 00:28:42.540 }, 00:28:42.540 "method": "bdev_nvme_attach_controller" 00:28:42.540 },{ 00:28:42.540 "params": { 00:28:42.540 "name": "Nvme7", 00:28:42.540 "trtype": "tcp", 00:28:42.540 "traddr": "10.0.0.2", 
00:28:42.540 "adrfam": "ipv4", 00:28:42.540 "trsvcid": "4420", 00:28:42.540 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:42.540 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:42.540 "hdgst": false, 00:28:42.540 "ddgst": false 00:28:42.540 }, 00:28:42.540 "method": "bdev_nvme_attach_controller" 00:28:42.540 },{ 00:28:42.540 "params": { 00:28:42.540 "name": "Nvme8", 00:28:42.540 "trtype": "tcp", 00:28:42.540 "traddr": "10.0.0.2", 00:28:42.540 "adrfam": "ipv4", 00:28:42.540 "trsvcid": "4420", 00:28:42.540 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:42.540 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:42.540 "hdgst": false, 00:28:42.540 "ddgst": false 00:28:42.540 }, 00:28:42.540 "method": "bdev_nvme_attach_controller" 00:28:42.540 },{ 00:28:42.540 "params": { 00:28:42.540 "name": "Nvme9", 00:28:42.540 "trtype": "tcp", 00:28:42.540 "traddr": "10.0.0.2", 00:28:42.540 "adrfam": "ipv4", 00:28:42.540 "trsvcid": "4420", 00:28:42.540 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:42.540 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:42.540 "hdgst": false, 00:28:42.540 "ddgst": false 00:28:42.540 }, 00:28:42.540 "method": "bdev_nvme_attach_controller" 00:28:42.540 },{ 00:28:42.540 "params": { 00:28:42.540 "name": "Nvme10", 00:28:42.540 "trtype": "tcp", 00:28:42.540 "traddr": "10.0.0.2", 00:28:42.540 "adrfam": "ipv4", 00:28:42.540 "trsvcid": "4420", 00:28:42.540 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:42.540 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:42.540 "hdgst": false, 00:28:42.540 "ddgst": false 00:28:42.540 }, 00:28:42.540 "method": "bdev_nvme_attach_controller" 00:28:42.540 }' 00:28:42.540 [2024-12-16 05:57:16.256384] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.540 [2024-12-16 05:57:16.295290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.964 Running I/O for 10 seconds... 
00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:44.531 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:44.808 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:44.808 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:44.808 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:44.808 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:44.808 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.808 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:44.808 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.808 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=136 00:28:44.808 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:28:44.808 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:44.808 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:44.808 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:44.808 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3470529 00:28:44.809 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3470529 ']' 00:28:44.809 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3470529 00:28:44.809 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:28:44.809 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:44.809 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3470529 00:28:44.809 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:44.809 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:44.809 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3470529' 00:28:44.809 killing process with pid 3470529 00:28:44.809 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 3470529 00:28:44.809 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 3470529 00:28:44.809 [2024-12-16 05:57:18.547271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 05:57:18.547481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set 00:28:44.809 [2024-12-16 
05:57:18.547488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2960 is same with the state(6) to be set
[... identical tcp.c:1773 error repeated for tqpair=0x13e2960 up to 2024-12-16 05:57:18.547705 ...]
00:28:44.809 [2024-12-16 05:57:18.548648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e5510 is same with the state(6) to be set
[... identical tcp.c:1773 error repeated for tqpair=0x13e5510 up to 2024-12-16 05:57:18.549073 ...]
00:28:44.810 [2024-12-16 05:57:18.551212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e3300 is same with the state(6) to be set
[... identical tcp.c:1773 error repeated for tqpair=0x13e3300 up to 2024-12-16 05:57:18.551598 ...]
00:28:44.811 [2024-12-16 05:57:18.552638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e37f0 is same with the state(6) to be set
[... identical tcp.c:1773 error repeated for tqpair=0x13e37f0 up to 2024-12-16 05:57:18.553271 ...]
00:28:44.812 [2024-12-16 05:57:18.554359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e41b0 is same with the state(6) to be set
[... identical tcp.c:1773 error repeated for tqpair=0x13e41b0 up to 2024-12-16 05:57:18.554744 ...]
00:28:44.812 [2024-12-16 05:57:18.555689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e4680 is same with the state(6) to be set
[... identical tcp.c:1773 error repeated for tqpair=0x13e4680 up to 2024-12-16 05:57:18.556083 ...]
00:28:44.813 [2024-12-16 05:57:18.557127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e5020 is same with the state(6) to be set
[... identical tcp.c:1773 error repeated for tqpair=0x13e5020 up to 2024-12-16 05:57:18.557527 ...]
00:28:44.814 [2024-12-16 05:57:18.570829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.570877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.570888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.570896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.570904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.570912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.570925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.570932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.570939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14863a0 is same with the state(6) to be set 00:28:44.814 [2024-12-16 05:57:18.570974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.570983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.570990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.570997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.571005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.571011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.571018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.571025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.571031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b2570 is same with the state(6) to be set 00:28:44.814 [2024-12-16 05:57:18.571057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.571065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.571073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:44.814 [2024-12-16 05:57:18.571080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.571087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.571093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.571100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.571109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.571116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147c3d0 is same with the state(6) to be set 00:28:44.814 [2024-12-16 05:57:18.571141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.571149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.571156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.571163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.571171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.571179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.571186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.571194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.571200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da070 is same with the state(6) to be set 00:28:44.814 [2024-12-16 05:57:18.571227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.571235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.571242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.571249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.571257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.571264] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.571271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.571278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.571284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d9d50 is same with the state(6) to be set 00:28:44.814 [2024-12-16 05:57:18.571308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.571316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.571324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.571331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.814 [2024-12-16 05:57:18.571338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.814 [2024-12-16 05:57:18.571344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.571352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.815 [2024-12-16 05:57:18.571358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.571365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1392610 is same with the state(6) to be set 00:28:44.815 [2024-12-16 05:57:18.571390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.815 [2024-12-16 05:57:18.571398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.571405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.815 [2024-12-16 05:57:18.571414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.571421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.815 [2024-12-16 05:57:18.571428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.571435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.815 [2024-12-16 05:57:18.571441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:44.815 [2024-12-16 05:57:18.571448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3f90 is same with the state(6) to be set 00:28:44.815 [2024-12-16 05:57:18.571473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.815 [2024-12-16 05:57:18.571481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.571488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.815 [2024-12-16 05:57:18.571495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.571503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.815 [2024-12-16 05:57:18.571509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.571516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.815 [2024-12-16 05:57:18.571523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.571529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14842d0 is same with the state(6) to be set 00:28:44.815 [2024-12-16 05:57:18.571555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.815 [2024-12-16 05:57:18.571563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.571571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.815 [2024-12-16 05:57:18.571577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.571584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.815 [2024-12-16 05:57:18.571591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.571598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.815 [2024-12-16 05:57:18.571605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.571611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1486800 is same with the state(6) to be set 00:28:44.815 [2024-12-16 05:57:18.571636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.815 [2024-12-16 05:57:18.571644] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.571653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.815 [2024-12-16 05:57:18.571660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.571667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.815 [2024-12-16 05:57:18.571674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.571681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.815 [2024-12-16 05:57:18.571687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.571694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147b6c0 is same with the state(6) to be set 00:28:44.815 [2024-12-16 05:57:18.588616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.815 [2024-12-16 05:57:18.588664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.588682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.815 [2024-12-16 05:57:18.588690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.588699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.815 [2024-12-16 05:57:18.588706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.588715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.815 [2024-12-16 05:57:18.588721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.588730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.815 [2024-12-16 05:57:18.588736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.588744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.815 [2024-12-16 05:57:18.588751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.588759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.815 [2024-12-16 05:57:18.588765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.588773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.815 [2024-12-16 05:57:18.588781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.588789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.815 [2024-12-16 05:57:18.588795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.588812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.815 [2024-12-16 05:57:18.588819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.588828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.815 [2024-12-16 05:57:18.588834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.588843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.815 [2024-12-16 05:57:18.588856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.588864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.815 [2024-12-16 05:57:18.588871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.588880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.815 [2024-12-16 05:57:18.588887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.588895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.815 [2024-12-16 05:57:18.588902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.588910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.815 [2024-12-16 05:57:18.588917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.588925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:44.815 [2024-12-16 05:57:18.588932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.588940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.815 [2024-12-16 05:57:18.588947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.588956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.815 [2024-12-16 05:57:18.588963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.588971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.815 [2024-12-16 05:57:18.588978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.588987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.815 [2024-12-16 05:57:18.588994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.815 [2024-12-16 05:57:18.589002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:44.816 [2024-12-16 05:57:18.589235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 
[2024-12-16 05:57:18.589387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 
05:57:18.589544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.816 [2024-12-16 05:57:18.589615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.816 [2024-12-16 05:57:18.589623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.589631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.589638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.589647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.589653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.589661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1887950 is same with the state(6) to be set 00:28:44.817 [2024-12-16 05:57:18.589722] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1887950 was disconnected and freed. reset controller. 
00:28:44.817 [2024-12-16 05:57:18.589882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.589895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.589908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.589916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.589925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.589933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.589941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.589948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.589958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.589965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.589974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.589981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.589990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.590013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.590028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.590043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 
05:57:18.590060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.590075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.590090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.590105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.590121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.590136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.590151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.590166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.590181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.590199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 
05:57:18.590214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.590229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.590244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.590259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.590274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.590288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.590304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.590319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.590334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 05:57:18.590349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.817 [2024-12-16 05:57:18.590356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.817 [2024-12-16 
05:57:18.590364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.817 [2024-12-16 05:57:18.590371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (identical WRITE command / ABORTED - SQ DELETION pairs repeat for cid:31 through cid:63, lba:28544 through lba:32640) ...
00:28:44.818 [2024-12-16 05:57:18.590962] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x188b8e0 was disconnected and freed. reset controller.
00:28:44.818 [2024-12-16 05:57:18.591141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14863a0 (9): Bad file descriptor
... (the same flush error repeats for tqpair=0x18b2570, 0x147c3d0, 0x18da070, 0x18d9d50, 0x1392610, 0x18f3f90, 0x14842d0, 0x1486800 and 0x147b6c0) ...
00:28:44.818 [2024-12-16 05:57:18.591381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.818 [2024-12-16 05:57:18.591392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (identical WRITE command / ABORTED - SQ DELETION pairs repeat for cid:1 through cid:63, lba:24704 through lba:32640) ...
00:28:44.820 [2024-12-16 05:57:18.600017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1974790 is same with the state(6) to be set
00:28:44.820 [2024-12-16 05:57:18.600073] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1974790 was disconnected and freed. reset controller.
00:28:44.820 [2024-12-16 05:57:18.600108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.820 [2024-12-16 05:57:18.600116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (identical READ command / ABORTED - SQ DELETION pairs repeat for cid:1 through cid:63, lba:24704 through lba:32640) ...
00:28:44.822 [2024-12-16 05:57:18.601129] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x168b410 was disconnected and freed. reset controller.
00:28:44.822 [2024-12-16 05:57:18.603307] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
... (the same failover notice repeats three more times) ...
00:28:44.822 [2024-12-16 05:57:18.605867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:28:44.822 [2024-12-16 05:57:18.606065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.822 [2024-12-16 05:57:18.606084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (identical READ command / ABORTED - SQ DELETION pairs repeat for cid:5 through cid:42, lba:25216 through lba:29952) ...
00:28:44.823 [2024-12-16 05:57:18.606868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.823 [2024-12-16 05:57:18.606876] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.606887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.606896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.606907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.606916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.606926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.606935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.606946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.606954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.606967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.606976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.606987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.606995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.607008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.607017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.607028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.607037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.607047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.607056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.607067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.607076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.607087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.607095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.607106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.607114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.607126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.607134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.607145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.607153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.607164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.607172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.607183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.607192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.607203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.607213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.607224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.607233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.607244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.607252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.607263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.607272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.607283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.607292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.607303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.607312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.607323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.607331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.607342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.607351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.609018] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:44.823 [2024-12-16 05:57:18.609413] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:44.823 [2024-12-16 05:57:18.609468] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:44.823 [2024-12-16 05:57:18.609515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:44.823 [2024-12-16 05:57:18.609718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.823 [2024-12-16 05:57:18.609735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147c3d0 with addr=10.0.0.2, port=4420 00:28:44.823 [2024-12-16 05:57:18.609747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147c3d0 is same with the state(6) to be set 00:28:44.823 [2024-12-16 05:57:18.609823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.609835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.609857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.609868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 05:57:18.609879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.823 [2024-12-16 05:57:18.609892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.823 [2024-12-16 
05:57:18.609903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.609912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.609924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.609933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.609944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.609953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.609964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.609973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.609984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.609992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610102] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.824 [2024-12-16 05:57:18.610690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.824 [2024-12-16 05:57:18.610698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.610709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.610718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.610729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.610737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.610748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.610756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.610767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.610776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.610786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.610794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.610806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.610814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.610825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.610833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.610844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.610866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.610878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.610888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.610899] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.610907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.610919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.610928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.610939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.610948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.610958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.610967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.610978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.610987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.610998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.611006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.611017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.611025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.611036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.611044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.611055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.611065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.611076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.611085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.611096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.611105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.613045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.613064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.613083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.613093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.613104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.613113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.613124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.613134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.613145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.613154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.613164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.613173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.613184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.613193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.613204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.613213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.613223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.613232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.613242] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.613251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.613262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.613270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.613281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.613290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.613301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.613310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.613320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.613331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.613342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.613351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.613361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.613370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.613381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.613390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.613401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.613409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.613420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.613429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.613440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.613449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.825 [2024-12-16 05:57:18.613459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.825 [2024-12-16 05:57:18.613469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.613984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.613993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.614004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.614013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.614024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.614032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.614043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.826 [2024-12-16 05:57:18.614051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.826 [2024-12-16 05:57:18.614062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
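[editor's note, not test output] For context on the errors interleaved with the aborted commands above: the repeated "posix_sock_create: *ERROR*: connect() failed, errno = 111" lines occur while the test is deliberately resetting the controllers (nqn.2016-06.io.spdk:cnode2/cnode5), and errno 111 on Linux is ECONNREFUSED. The sketch below is illustrative only, it is not SPDK's posix.c or any part of the test; it assumes the 10.0.0.2:4420 address/port taken from the log and simply shows how a refused TCP connect surfaces that errno.
```c
/*
 * Illustrative only -- not part of the SPDK autotest above.
 * Minimal POSIX sketch: connecting to a port with no listener
 * (here assumed 10.0.0.2:4420, the NVMe/TCP target seen in the log)
 * fails with errno 111 (ECONNREFUSED) on Linux, matching the
 * "connect() failed, errno = 111" messages printed while the
 * controller is being reset.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    /* While the target side is down, nothing accepts on the port, so
     * connect() returns -1 and errno is ECONNREFUSED (111 on Linux). */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```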
00:28:44.826 [2024-12-16 05:57:18.614071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.826 [2024-12-16 05:57:18.614081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.826 [2024-12-16 05:57:18.614090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.827 [2024-12-16 05:57:18.614336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.827 [2024-12-16 05:57:18.614347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.827 [2024-12-16 05:57:18.614357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1888e80 is same with the state(6) to be set
00:28:44.827 [2024-12-16 05:57:18.615542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.827 [2024-12-16 05:57:18.615554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.828 [2024-12-16 05:57:18.624013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.828 [2024-12-16 05:57:18.624020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.828 [2024-12-16 05:57:18.625114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.828 [2024-12-16 05:57:18.625131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.830 [2024-12-16 05:57:18.626372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.830 [2024-12-16 05:57:18.626381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.830 [2024-12-16 05:57:18.626391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188cdb0 is same with the state(6) to be set
00:28:44.830 [2024-12-16 05:57:18.627778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.830 [2024-12-16 05:57:18.627795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.831 [2024-12-16 05:57:18.628980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.831 [2024-12-16 05:57:18.628990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.831 [2024-12-16 05:57:18.629003] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.831 [2024-12-16 05:57:18.629012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.831 [2024-12-16 05:57:18.629023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.831 [2024-12-16 05:57:18.629034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.831 [2024-12-16 05:57:18.629047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.831 [2024-12-16 05:57:18.629057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.831 [2024-12-16 05:57:18.629070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.831 [2024-12-16 05:57:18.629078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.831 [2024-12-16 05:57:18.629089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.831 [2024-12-16 05:57:18.629098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.831 [2024-12-16 05:57:18.629110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.832 [2024-12-16 05:57:18.629119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.832 [2024-12-16 05:57:18.629132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.832 [2024-12-16 05:57:18.629141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.832 [2024-12-16 05:57:18.629151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e330 is same with the state(6) to be set 00:28:44.832 [2024-12-16 05:57:18.630544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:28:44.832 [2024-12-16 05:57:18.630564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:44.832 [2024-12-16 05:57:18.630578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:44.832 [2024-12-16 05:57:18.630590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.832 [2024-12-16 05:57:18.630603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:44.832 [2024-12-16 05:57:18.630878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.832 [2024-12-16 05:57:18.630898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14863a0 with 
addr=10.0.0.2, port=4420 00:28:44.832 [2024-12-16 05:57:18.630911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14863a0 is same with the state(6) to be set 00:28:44.832 [2024-12-16 05:57:18.630930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147c3d0 (9): Bad file descriptor 00:28:44.832 [2024-12-16 05:57:18.630954] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:44.832 [2024-12-16 05:57:18.630969] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:44.832 [2024-12-16 05:57:18.630986] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:44.832 [2024-12-16 05:57:18.631019] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:44.832 [2024-12-16 05:57:18.631032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14863a0 (9): Bad file descriptor 00:28:44.832 [2024-12-16 05:57:18.631418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:28:44.832 [2024-12-16 05:57:18.631436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:28:45.091 task offset: 24576 on job bdev=Nvme5n1 fails 00:28:45.091 00:28:45.091 Latency(us) 00:28:45.091 [2024-12-16T04:57:18.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.091 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:45.091 Job: Nvme1n1 ended in about 0.84 seconds with error 00:28:45.091 Verification LBA range: start 0x0 length 0x400 00:28:45.092 Nvme1n1 : 0.84 234.16 14.64 76.46 0.00 203858.49 15978.30 207717.91 00:28:45.092 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:45.092 Job: Nvme2n1 ended in about 0.83 seconds with error 00:28:45.092 Verification LBA range: start 0x0 length 0x400 00:28:45.092 Nvme2n1 : 0.83 231.53 14.47 77.18 0.00 201295.60 15354.15 231685.36 00:28:45.092 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:45.092 Job: Nvme3n1 ended in about 0.83 seconds with error 00:28:45.092 Verification LBA range: start 0x0 length 0x400 00:28:45.092 Nvme3n1 : 0.83 231.26 14.45 77.09 0.00 197606.64 15603.81 211712.49 00:28:45.092 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:45.092 Job: Nvme4n1 ended in about 0.83 seconds with error 00:28:45.092 Verification LBA range: start 0x0 length 0x400 00:28:45.092 Nvme4n1 : 0.83 235.21 14.70 76.80 0.00 191529.08 14605.17 216705.71 00:28:45.092 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:45.092 Job: Nvme5n1 ended in about 0.83 seconds with error 00:28:45.092 Verification LBA range: start 0x0 length 0x400 00:28:45.092 Nvme5n1 : 0.83 232.15 14.51 77.38 0.00 189086.96 33204.91 213709.78 00:28:45.092 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:45.092 Job: Nvme6n1 ended in about 0.84 seconds with error 00:28:45.092 Verification LBA range: start 0x0 length 0x400 00:28:45.092 Nvme6n1 : 0.84 228.50 14.28 76.17 0.00 188534.00 18599.74 208716.56 00:28:45.092 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:45.092 Job: Nvme7n1 ended in about 0.85 seconds with error 00:28:45.092 Verification LBA 
range: start 0x0 length 0x400 00:28:45.092 Nvme7n1 : 0.85 150.62 9.41 75.31 0.00 249546.36 12982.37 233682.65 00:28:45.092 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:45.092 Job: Nvme8n1 ended in about 0.83 seconds with error 00:28:45.092 Verification LBA range: start 0x0 length 0x400 00:28:45.092 Nvme8n1 : 0.83 231.86 14.49 77.29 0.00 177866.48 13606.52 211712.49 00:28:45.092 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:45.092 Job: Nvme9n1 ended in about 0.85 seconds with error 00:28:45.092 Verification LBA range: start 0x0 length 0x400 00:28:45.092 Nvme9n1 : 0.85 150.19 9.39 75.09 0.00 240326.62 32206.26 221698.93 00:28:45.092 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:45.092 Job: Nvme10n1 ended in about 0.86 seconds with error 00:28:45.092 Verification LBA range: start 0x0 length 0x400 00:28:45.092 Nvme10n1 : 0.86 149.70 9.36 74.85 0.00 236146.43 16852.11 237677.23 00:28:45.092 [2024-12-16T04:57:18.948Z] =================================================================================================================== 00:28:45.092 [2024-12-16T04:57:18.948Z] Total : 2075.19 129.70 763.62 0.00 204764.42 12982.37 237677.23 00:28:45.092 [2024-12-16 05:57:18.662309] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:45.092 [2024-12-16 05:57:18.662358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:45.092 [2024-12-16 05:57:18.662578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.092 [2024-12-16 05:57:18.662595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f3f90 with addr=10.0.0.2, port=4420 00:28:45.092 [2024-12-16 05:57:18.662606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3f90 is same with the state(6) to be set 00:28:45.092 [2024-12-16 05:57:18.662827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.092 [2024-12-16 05:57:18.662851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14842d0 with addr=10.0.0.2, port=4420 00:28:45.092 [2024-12-16 05:57:18.662860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14842d0 is same with the state(6) to be set 00:28:45.092 [2024-12-16 05:57:18.663062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.092 [2024-12-16 05:57:18.663074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b2570 with addr=10.0.0.2, port=4420 00:28:45.092 [2024-12-16 05:57:18.663081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b2570 is same with the state(6) to be set 00:28:45.092 [2024-12-16 05:57:18.663224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.092 [2024-12-16 05:57:18.663235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1486800 with addr=10.0.0.2, port=4420 00:28:45.092 [2024-12-16 05:57:18.663242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1486800 is same with the state(6) to be set 00:28:45.092 [2024-12-16 05:57:18.663300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.092 [2024-12-16 05:57:18.663310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147b6c0 
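As a quick arithmetic check on the per-device latency summary printed above (the bdevperf job table), the Total row is the column-wise sum of the ten per-device rows: 234.16 + 231.53 + 231.26 + 235.21 + 232.15 + 228.50 + 150.62 + 231.86 + 150.19 + 149.70 = 2075.18 IOPS (the printed 2075.19 differs only by rounding), the MiB/s column sums to exactly 129.70, and Fail/s sums to 763.62. The min and max columns in the Total row are instead the overall minimum (12982.37 us) and maximum (237677.23 us) across devices, not sums.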
with addr=10.0.0.2, port=4420 00:28:45.092 [2024-12-16 05:57:18.663318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147b6c0 is same with the state(6) to be set 00:28:45.092 [2024-12-16 05:57:18.663328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:45.092 [2024-12-16 05:57:18.663335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:45.092 [2024-12-16 05:57:18.663345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:45.092 [2024-12-16 05:57:18.664471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.092 [2024-12-16 05:57:18.664645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.092 [2024-12-16 05:57:18.664659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1392610 with addr=10.0.0.2, port=4420 00:28:45.092 [2024-12-16 05:57:18.664669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1392610 is same with the state(6) to be set 00:28:45.092 [2024-12-16 05:57:18.664889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.092 [2024-12-16 05:57:18.664901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d9d50 with addr=10.0.0.2, port=4420 00:28:45.092 [2024-12-16 05:57:18.664909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d9d50 is same with the state(6) to be set 00:28:45.092 [2024-12-16 05:57:18.665041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.092 [2024-12-16 05:57:18.665051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18da070 with addr=10.0.0.2, port=4420 00:28:45.092 [2024-12-16 05:57:18.665059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da070 is same with the state(6) to be set 00:28:45.092 [2024-12-16 05:57:18.665073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f3f90 (9): Bad file descriptor 00:28:45.092 [2024-12-16 05:57:18.665085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14842d0 (9): Bad file descriptor 00:28:45.092 [2024-12-16 05:57:18.665095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b2570 (9): Bad file descriptor 00:28:45.092 [2024-12-16 05:57:18.665104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1486800 (9): Bad file descriptor 00:28:45.092 [2024-12-16 05:57:18.665112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147b6c0 (9): Bad file descriptor 00:28:45.092 [2024-12-16 05:57:18.665123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:45.092 [2024-12-16 05:57:18.665131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:45.092 [2024-12-16 05:57:18.665138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:45.092 [2024-12-16 05:57:18.665182] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:45.092 [2024-12-16 05:57:18.665193] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:45.092 [2024-12-16 05:57:18.665205] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:45.092 [2024-12-16 05:57:18.665214] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:45.092 [2024-12-16 05:57:18.665224] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:45.092 [2024-12-16 05:57:18.665234] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:45.092 [2024-12-16 05:57:18.665497] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.092 [2024-12-16 05:57:18.665516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1392610 (9): Bad file descriptor 00:28:45.092 [2024-12-16 05:57:18.665527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d9d50 (9): Bad file descriptor 00:28:45.092 [2024-12-16 05:57:18.665537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18da070 (9): Bad file descriptor 00:28:45.092 [2024-12-16 05:57:18.665544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:45.092 [2024-12-16 05:57:18.665551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:45.092 [2024-12-16 05:57:18.665559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:45.092 [2024-12-16 05:57:18.665569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:45.092 [2024-12-16 05:57:18.665577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:45.092 [2024-12-16 05:57:18.665583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:45.092 [2024-12-16 05:57:18.665593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:45.092 [2024-12-16 05:57:18.665599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:45.092 [2024-12-16 05:57:18.665605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:45.092 [2024-12-16 05:57:18.665614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.092 [2024-12-16 05:57:18.665622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.092 [2024-12-16 05:57:18.665628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:45.092 [2024-12-16 05:57:18.665636] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:45.092 [2024-12-16 05:57:18.665644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:45.092 [2024-12-16 05:57:18.665650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:45.093 [2024-12-16 05:57:18.665707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:45.093 [2024-12-16 05:57:18.665719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.093 [2024-12-16 05:57:18.665728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.093 [2024-12-16 05:57:18.665735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.093 [2024-12-16 05:57:18.665741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.093 [2024-12-16 05:57:18.665747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.093 [2024-12-16 05:57:18.665759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:45.093 [2024-12-16 05:57:18.665765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:45.093 [2024-12-16 05:57:18.665771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:45.093 [2024-12-16 05:57:18.665781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:45.093 [2024-12-16 05:57:18.665788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:45.093 [2024-12-16 05:57:18.665793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:45.093 [2024-12-16 05:57:18.665803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:45.093 [2024-12-16 05:57:18.665809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:45.093 [2024-12-16 05:57:18.665816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:45.093 [2024-12-16 05:57:18.665838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.093 [2024-12-16 05:57:18.665845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.093 [2024-12-16 05:57:18.665858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
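For readers following the error cascade above: errno = 111 in the posix_sock_create connect() failures is ECONNREFUSED on Linux, which is consistent with the target having been stopped while bdevperf I/O was still in flight in this shutdown case; every host-side reconnect attempt to 10.0.0.2:4420 is refused, so each controller (cnode1 through cnode10) ends up in the "in failed state" / "Resetting controller failed" messages logged here.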
00:28:45.093 [2024-12-16 05:57:18.665944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.093 [2024-12-16 05:57:18.665957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147c3d0 with addr=10.0.0.2, port=4420 00:28:45.093 [2024-12-16 05:57:18.665965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147c3d0 is same with the state(6) to be set 00:28:45.093 [2024-12-16 05:57:18.665992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147c3d0 (9): Bad file descriptor 00:28:45.093 [2024-12-16 05:57:18.666016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:45.093 [2024-12-16 05:57:18.666023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:45.093 [2024-12-16 05:57:18.666030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:45.093 [2024-12-16 05:57:18.666056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.351 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # nvmfpid= 00:28:45.351 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # sleep 1 00:28:46.288 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@143 -- # kill -9 3470771 00:28:46.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 143: kill: (3470771) - No such process 00:28:46.288 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@143 -- # true 00:28:46.288 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@145 -- # stoptarget 00:28:46.288 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:46.288 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:46.288 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:46.288 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:46.288 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:46.288 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:46.288 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:46.288 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:46.288 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:46.288 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:46.288 rmmod nvme_tcp 00:28:46.288 rmmod nvme_fabrics 00:28:46.288 rmmod nvme_keyring 00:28:46.288 05:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:28:46.288 05:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:46.288 05:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:46.288 05:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:28:46.288 05:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:46.288 05:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:46.288 05:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:46.288 05:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:28:46.288 05:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-save 00:28:46.288 05:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:46.288 05:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-restore 00:28:46.288 05:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:46.288 05:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:46.288 05:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.288 05:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.288 05:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:48.821 00:28:48.821 real 0m7.050s 00:28:48.821 user 0m16.347s 00:28:48.821 sys 0m1.259s 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:48.821 ************************************ 00:28:48.821 END TEST nvmf_shutdown_tc3 00:28:48.821 ************************************ 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@173 -- # [[ e810 == \e\8\1\0 ]] 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@173 -- # [[ tcp == \r\d\m\a ]] 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@174 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:48.821 ************************************ 00:28:48.821 START TEST nvmf_shutdown_tc4 00:28:48.821 ************************************ 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- 
# nvmf_shutdown_tc4 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # starttarget 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:48.821 05:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.821 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:48.822 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.822 05:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:48.822 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:48.822 Found net devices under 0000:af:00.0: cvl_0_0 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:48.822 05:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:48.822 Found net devices under 0000:af:00.1: cvl_0_1 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # is_hw=yes 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:48.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:48.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:28:48.822 00:28:48.822 --- 10.0.0.2 ping statistics --- 00:28:48.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.822 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:48.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:48.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:28:48.822 00:28:48.822 --- 10.0.0.1 ping statistics --- 00:28:48.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.822 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # return 0 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:48.822 05:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # nvmfpid=3471851 00:28:48.822 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # waitforlisten 3471851 00:28:48.823 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:48.823 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 3471851 ']' 00:28:48.823 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.823 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:48.823 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.823 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:48.823 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:48.823 [2024-12-16 05:57:22.555533] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:48.823 [2024-12-16 05:57:22.555577] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.823 [2024-12-16 05:57:22.617754] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:48.823 [2024-12-16 05:57:22.659551] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.823 [2024-12-16 05:57:22.659585] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.823 [2024-12-16 05:57:22.659595] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.823 [2024-12-16 05:57:22.659602] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:48.823 [2024-12-16 05:57:22.659609] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
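A note on the nvmf_tgt invocation above: the core mask -m 0x1E is binary 11110, i.e. CPU cores 1 through 4, which matches both the EAL message "Total cores available: 4" and the four "Reactor started on core N" notices (cores 1-4) that follow; -e 0xFFFF enables all tracepoint groups, which is what the "Tracepoint Group Mask 0xFFFF specified" notice reports.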
00:28:48.823 [2024-12-16 05:57:22.659715] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.823 [2024-12-16 05:57:22.659804] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:48.823 [2024-12-16 05:57:22.659913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.823 [2024-12-16 05:57:22.659912] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:49.082 [2024-12-16 05:57:22.806989] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.082 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:49.082 Malloc1 00:28:49.082 [2024-12-16 05:57:22.906531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.082 Malloc2 00:28:49.340 Malloc3 00:28:49.340 Malloc4 00:28:49.340 Malloc5 00:28:49.340 Malloc6 00:28:49.340 Malloc7 00:28:49.340 Malloc8 00:28:49.599 Malloc9 00:28:49.599 Malloc10 00:28:49.599 05:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.599 05:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:49.599 05:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:49.599 05:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:49.599 05:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@154 -- # perfpid=3472065 00:28:49.599 05:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # sleep 5 00:28:49.599 05:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@153 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:28:49.599 [2024-12-16 05:57:23.391964] 
subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:28:54.872 05:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@157 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:54.872 05:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@160 -- # killprocess 3471851 00:28:54.872 05:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3471851 ']' 00:28:54.872 05:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3471851 00:28:54.872 05:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:28:54.872 05:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:54.872 05:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3471851 00:28:54.872 05:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:54.872 05:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:54.872 05:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3471851' 00:28:54.872 killing process with pid 3471851 00:28:54.872 05:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 3471851 00:28:54.872 05:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 3471851 00:28:54.872 [2024-12-16 05:57:28.410181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46520 is same with the state(6) to be set 00:28:54.872 [2024-12-16 05:57:28.410234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46520 is same with the state(6) to be set 00:28:54.872 [2024-12-16 05:57:28.410242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46520 is same with the state(6) to be set 00:28:54.872 [2024-12-16 05:57:28.410249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46520 is same with the state(6) to be set 00:28:54.872 [2024-12-16 05:57:28.410256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46520 is same with the state(6) to be set 00:28:54.872 [2024-12-16 05:57:28.410262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46520 is same with the state(6) to be set 00:28:54.872 [2024-12-16 05:57:28.410268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46520 is same with the state(6) to be set 00:28:54.872 [2024-12-16 05:57:28.410274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46520 is same with the state(6) to be set 00:28:54.872 [2024-12-16 05:57:28.410280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46520 is same with the state(6) to be set 
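The sequence above is the core of shutdown test case 4: the TCP transport and ten NVMe-oF subsystems (the Malloc1 through Malloc10 lines above) are listening on 10.0.0.2:4420, spdk_nvme_perf is started against them as PID 3472065, and after a short sleep the harness kills the nvmf_tgt process (PID 3471851) while queue-depth-128 random-write I/O is still in flight. The long run of "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines that follows is the initiator reporting those in-flight writes being aborted as the target's queue pairs disappear, which is the shutdown-under-load behaviour this case exercises. Below is a minimal sketch of that kill-under-load step, using values copied from the trace; the surrounding structure is illustrative, not the real shutdown.sh.

    # Hedged sketch of the shutdown-under-load step; PIDs, paths, sleep and
    # perf options come from the trace above, the helper structure does not.
    NVMF_TGT_PID=3471851

    # Drive queue-depth-128 random writes at the ten TCP subsystems for 20s.
    build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
    perfpid=$!

    sleep 5                                  # let the workload ramp up

    kill "$NVMF_TGT_PID"                     # tear the target down mid-run
    wait "$NVMF_TGT_PID" 2>/dev/null || true

    # perf now runs into aborted completions; the point of the case is that
    # the target side shuts down cleanly while these writes are outstanding.
    wait "$perfpid" || true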
00:28:54.872 [2024-12-16 05:57:28.411572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46ec0 is same with the state(6) to be set 00:28:54.872 [2024-12-16 05:57:28.411600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46ec0 is same with the state(6) to be set 00:28:54.872 [2024-12-16 05:57:28.411608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46ec0 is same with the state(6) to be set 00:28:54.872 [2024-12-16 05:57:28.411620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46ec0 is same with the state(6) to be set 00:28:54.872 [2024-12-16 05:57:28.411626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46ec0 is same with the state(6) to be set 00:28:54.872 [2024-12-16 05:57:28.411632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46ec0 is same with the state(6) to be set 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 [2024-12-16 05:57:28.417137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 
starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 starting I/O failed: -6 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.872 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 [2024-12-16 05:57:28.418053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O 
failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with 
error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 [2024-12-16 05:57:28.419042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: 
-6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 [2024-12-16 05:57:28.420586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:54.873 NVMe io qpair process completion error 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 Write completed with error (sct=0, sc=8) 00:28:54.873 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error 
(sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 [2024-12-16 05:57:28.421581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write 
completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 [2024-12-16 05:57:28.422465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O 
failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 [2024-12-16 05:57:28.423442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error (sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.874 Write completed with error 
(sct=0, sc=8) 00:28:54.874 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error 
(sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 [2024-12-16 05:57:28.425151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:54.875 NVMe io qpair process completion error 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 [2024-12-16 05:57:28.426057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with 
error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 [2024-12-16 05:57:28.426947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 
00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.875 starting I/O failed: -6 00:28:54.875 Write completed with error (sct=0, sc=8) 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 [2024-12-16 05:57:28.427934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such 
device or address) on qpair id 4 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 00:28:54.876 starting I/O failed: -6 00:28:54.876 Write completed with error (sct=0, sc=8) 
00:28:54.876 Write completed with error (sct=0, sc=8)
00:28:54.876 starting I/O failed: -6
00:28:54.876 [log condensed: the two entries above recur several hundred times, interleaved, around each of the qpair error entries that follow; duplicate entries elided]
00:28:54.876 [2024-12-16 05:57:28.429632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:54.876 NVMe io qpair process completion error
00:28:54.877 [2024-12-16 05:57:28.430482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:54.877 [2024-12-16 05:57:28.431349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:54.877 [2024-12-16 05:57:28.432357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:54.878 [2024-12-16 05:57:28.433905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:54.878 NVMe io qpair process completion error
00:28:54.878 [2024-12-16 05:57:28.434912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:54.878 [2024-12-16 05:57:28.435796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:54.879 [2024-12-16 05:57:28.436781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:54.879 [2024-12-16 05:57:28.441870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:54.879 NVMe io qpair process completion error
00:28:54.880 [2024-12-16 05:57:28.442900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:54.880 [2024-12-16 05:57:28.443754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:54.880 [2024-12-16 05:57:28.444765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:54.881 [2024-12-16 05:57:28.448781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:54.881 NVMe io qpair process completion error
00:28:54.881 [2024-12-16 05:57:28.449745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:54.881 [2024-12-16 05:57:28.450618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:54.882 [2024-12-16 05:57:28.451644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:54.882 [2024-12-16 05:57:28.453470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:54.882 NVMe io qpair process completion error
00:28:54.882 [2024-12-16 05:57:28.454379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:54.883 [2024-12-16 05:57:28.455283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:54.883 Write completed with error (sct=0, sc=8)
00:28:54.883 starting I/O failed: -6
00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 [2024-12-16 05:57:28.456291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 
00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.883 Write completed with error (sct=0, sc=8) 00:28:54.883 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 [2024-12-16 05:57:28.457990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:54.884 NVMe io qpair process completion error 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error 
(sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 [2024-12-16 05:57:28.458947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write 
completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 [2024-12-16 05:57:28.459840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed 
with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.884 Write completed with error (sct=0, sc=8) 00:28:54.884 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 [2024-12-16 05:57:28.460866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O 
failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O 
failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 [2024-12-16 05:57:28.466922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:54.885 NVMe io qpair process completion error 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 [2024-12-16 05:57:28.467927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with 
error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 starting I/O failed: -6 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.885 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 [2024-12-16 05:57:28.468763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:54.886 starting I/O failed: -6 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 
00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, 
sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 [2024-12-16 05:57:28.469805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 
00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.886 starting I/O failed: -6 00:28:54.886 Write completed with error (sct=0, sc=8) 00:28:54.887 starting I/O failed: -6 00:28:54.887 Write completed with error (sct=0, sc=8) 00:28:54.887 starting I/O failed: -6 00:28:54.887 Write completed with error (sct=0, sc=8) 00:28:54.887 starting I/O failed: -6 00:28:54.887 Write completed with error (sct=0, sc=8) 00:28:54.887 starting I/O failed: -6 00:28:54.887 Write completed with error (sct=0, sc=8) 00:28:54.887 starting I/O failed: -6 00:28:54.887 Write completed with error (sct=0, sc=8) 00:28:54.887 starting I/O failed: -6 00:28:54.887 Write completed with error (sct=0, sc=8) 00:28:54.887 starting I/O failed: -6 00:28:54.887 Write completed with error (sct=0, sc=8) 00:28:54.887 starting I/O failed: -6 00:28:54.887 Write completed with error (sct=0, sc=8) 00:28:54.887 starting I/O failed: -6 00:28:54.887 Write completed with error (sct=0, sc=8) 00:28:54.887 starting I/O failed: -6 00:28:54.887 Write completed with error (sct=0, sc=8) 00:28:54.887 starting I/O failed: -6 00:28:54.887 Write completed with error (sct=0, sc=8) 00:28:54.887 starting I/O failed: -6 00:28:54.887 Write completed with error (sct=0, sc=8) 00:28:54.887 starting I/O failed: -6 00:28:54.887 Write completed with error (sct=0, sc=8) 00:28:54.887 starting I/O failed: -6 00:28:54.887 Write completed with error (sct=0, sc=8) 00:28:54.887 starting I/O failed: -6 00:28:54.887 [2024-12-16 05:57:28.472247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:54.887 NVMe io qpair process completion error 00:28:54.887 Initializing NVMe Controllers 00:28:54.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:28:54.887 Controller IO queue size 128, less than required. 00:28:54.887 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:54.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:28:54.887 Controller IO queue size 128, less than required. 00:28:54.887 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:54.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:28:54.887 Controller IO queue size 128, less than required. 00:28:54.887 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:54.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:28:54.887 Controller IO queue size 128, less than required. 00:28:54.887 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:54.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:28:54.887 Controller IO queue size 128, less than required.
00:28:54.887 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:54.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:28:54.887 Controller IO queue size 128, less than required.
00:28:54.887 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:54.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:28:54.887 Controller IO queue size 128, less than required.
00:28:54.887 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:54.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:54.887 Controller IO queue size 128, less than required.
00:28:54.887 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:54.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:28:54.887 Controller IO queue size 128, less than required.
00:28:54.887 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:54.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:28:54.887 Controller IO queue size 128, less than required.
00:28:54.887 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:54.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:28:54.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:28:54.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:28:54.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:28:54.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:28:54.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:28:54.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:28:54.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:54.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:28:54.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:28:54.887 Initialization complete. Launching workers.
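Editor's note: the repeated "Controller IO queue size 128, less than required" warnings above mean the workload's queue depth exceeds the 128 entries each NVMe-oF controller advertises, so the surplus requests wait inside the host NVMe driver and show up as extra latency. A minimal re-run sketch is below; the flag meanings (-q queue depth, -o IO size in bytes, -w workload, -t seconds, -r transport ID) are assumed from typical spdk_nvme_perf usage and are not taken from this log, so treat it as a sketch rather than the harness's actual invocation.

# Hedged sketch: cap the queue depth at the controller's reported IO queue size (128)
# so requests are not left queued at the NVMe driver. Flag names are assumptions.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -q 128 -o 4096 -w write -t 10 \
  -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'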
00:28:54.887 ========================================================
00:28:54.887 Latency(us)
00:28:54.887 Device Information : IOPS MiB/s Average min max
00:28:54.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2223.53 95.54 57568.96 867.87 113773.24
00:28:54.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2159.87 92.81 59277.30 709.77 112591.47
00:28:54.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2180.53 93.69 58769.04 852.03 110601.33
00:28:54.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2236.38 96.09 57339.79 888.20 95439.70
00:28:54.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2216.99 95.26 57854.22 808.33 107439.82
00:28:54.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2196.97 94.40 58393.05 886.14 114398.10
00:28:54.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2242.71 96.37 57266.18 882.83 121104.93
00:28:54.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2204.56 94.73 57593.34 954.61 104615.06
00:28:54.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2178.63 93.61 58286.49 887.08 103985.71
00:28:54.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2183.27 93.81 58172.98 918.47 103192.47
00:28:54.887 ========================================================
00:28:54.887 Total : 22023.44 946.32 58045.44 709.77 121104.93
00:28:54.887
00:28:54.887 [2024-12-16 05:57:28.475172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c1c0 is same with the state(6) to be set
00:28:54.887 [2024-12-16 05:57:28.475218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60bb20 is same with the state(6) to be set
00:28:54.887 [2024-12-16 05:57:28.475250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c820 is same with the state(6) to be set
00:28:54.887 [2024-12-16 05:57:28.475279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60e350 is same with the state(6) to be set
00:28:54.887 [2024-12-16 05:57:28.475306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60cb50 is same with the state(6) to be set
00:28:54.887 [2024-12-16 05:57:28.475342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60e020 is same with the state(6) to be set
00:28:54.887 [2024-12-16 05:57:28.475370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60e680 is same with the state(6) to be set
00:28:54.887 [2024-12-16 05:57:28.475397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60dc40 is same with the state(6) to be set
00:28:54.887 [2024-12-16 05:57:28.475424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c4f0 is same with the state(6) to be set
00:28:54.887 [2024-12-16 05:57:28.475451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60bd00 is same with the state(6) to be set
00:28:54.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:55.146 05:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@161 -- # nvmfpid=
00:28:55.146 05:57:28
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@164 -- # sleep 1 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@165 -- # wait 3472065 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@165 -- # true 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@166 -- # stoptarget 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:56.083 rmmod nvme_tcp 00:28:56.083 rmmod nvme_fabrics 00:28:56.083 rmmod nvme_keyring 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-save 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-restore 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.083 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.613 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:58.613 00:28:58.613 real 0m9.758s 00:28:58.613 user 0m24.854s 00:28:58.613 sys 0m5.184s 00:28:58.613 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:58.613 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:58.613 ************************************ 00:28:58.613 END TEST nvmf_shutdown_tc4 00:28:58.613 ************************************ 00:28:58.613 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@177 -- # trap - SIGINT SIGTERM EXIT 00:28:58.613 00:28:58.613 real 0m39.158s 00:28:58.613 user 1m35.508s 00:28:58.613 sys 0m13.442s 00:28:58.613 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:58.613 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:58.613 ************************************ 00:28:58.613 END TEST nvmf_shutdown 00:28:58.613 ************************************ 00:28:58.613 05:57:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:58.613 00:28:58.613 real 18m10.896s 00:28:58.613 user 49m4.018s 00:28:58.613 sys 4m23.140s 00:28:58.613 05:57:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:58.613 05:57:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:58.613 ************************************ 00:28:58.613 END TEST nvmf_target_extra 00:28:58.613 ************************************ 00:28:58.613 05:57:32 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:58.613 05:57:32 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:58.613 05:57:32 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:58.613 05:57:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:58.613 ************************************ 00:28:58.613 START TEST nvmf_host 00:28:58.613 ************************************ 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:58.613 * Looking for test storage... 
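Editor's note: the xtrace above is the nvmf_shutdown_tc4 teardown: stoptarget removes the bdevperf config and RPC scratch files, then nvmftestfini unloads the kernel NVMe/TCP stack and strips the test's firewall rules before the nvmf_host suite starts. Condensed, and leaving out the remove_spdk_ns helper whose body is hidden behind an fd redirect, the teardown is roughly the following; every command shown appears verbatim in the trace.

# Condensed recap of the nvmftestfini teardown traced above.
sync
modprobe -v -r nvme-tcp        # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines come from this step
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the SPDK_NVMF-tagged rules
ip -4 addr flush cvl_0_1       # clear the test interface address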
00:28:58.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:58.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.613 --rc genhtml_branch_coverage=1 00:28:58.613 --rc genhtml_function_coverage=1 00:28:58.613 --rc genhtml_legend=1 00:28:58.613 --rc geninfo_all_blocks=1 00:28:58.613 --rc geninfo_unexecuted_blocks=1 00:28:58.613 00:28:58.613 ' 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:58.613 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.613 --rc genhtml_branch_coverage=1 00:28:58.613 --rc genhtml_function_coverage=1 00:28:58.613 --rc genhtml_legend=1 00:28:58.613 --rc geninfo_all_blocks=1 00:28:58.613 --rc geninfo_unexecuted_blocks=1 00:28:58.613 00:28:58.613 ' 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:58.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.613 --rc genhtml_branch_coverage=1 00:28:58.613 --rc genhtml_function_coverage=1 00:28:58.613 --rc genhtml_legend=1 00:28:58.613 --rc geninfo_all_blocks=1 00:28:58.613 --rc geninfo_unexecuted_blocks=1 00:28:58.613 00:28:58.613 ' 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:58.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.613 --rc genhtml_branch_coverage=1 00:28:58.613 --rc genhtml_function_coverage=1 00:28:58.613 --rc genhtml_legend=1 00:28:58.613 --rc geninfo_all_blocks=1 00:28:58.613 --rc geninfo_unexecuted_blocks=1 00:28:58.613 00:28:58.613 ' 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.613 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:58.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.614 ************************************ 00:28:58.614 START TEST nvmf_multicontroller 00:28:58.614 ************************************ 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:58.614 * Looking for test storage... 00:28:58.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:58.614 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:28:58.872 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:58.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.873 --rc genhtml_branch_coverage=1 00:28:58.873 --rc genhtml_function_coverage=1 00:28:58.873 --rc genhtml_legend=1 00:28:58.873 --rc geninfo_all_blocks=1 00:28:58.873 --rc geninfo_unexecuted_blocks=1 00:28:58.873 00:28:58.873 ' 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:58.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.873 --rc genhtml_branch_coverage=1 00:28:58.873 --rc genhtml_function_coverage=1 00:28:58.873 --rc genhtml_legend=1 00:28:58.873 --rc geninfo_all_blocks=1 00:28:58.873 --rc geninfo_unexecuted_blocks=1 00:28:58.873 00:28:58.873 ' 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:58.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.873 --rc genhtml_branch_coverage=1 00:28:58.873 --rc genhtml_function_coverage=1 00:28:58.873 --rc genhtml_legend=1 00:28:58.873 --rc geninfo_all_blocks=1 00:28:58.873 --rc geninfo_unexecuted_blocks=1 00:28:58.873 00:28:58.873 ' 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:58.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.873 --rc genhtml_branch_coverage=1 00:28:58.873 --rc genhtml_function_coverage=1 00:28:58.873 --rc genhtml_legend=1 00:28:58.873 --rc geninfo_all_blocks=1 00:28:58.873 --rc geninfo_unexecuted_blocks=1 00:28:58.873 00:28:58.873 ' 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:58.873 05:57:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:58.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:58.873 05:57:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:28:58.873 05:57:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.430 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:05.430 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:05.430 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:05.430 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:05.430 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:05.430 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:05.430 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:05.430 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:05.430 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:05.430 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:05.430 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:29:05.430 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:05.430 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:05.430 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:05.430 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:05.430 
05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:05.430 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:05.430 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:05.430 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:05.430 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:05.430 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:05.431 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:05.431 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:05.431 Found net devices under 0000:af:00.0: cvl_0_0 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:05.431 Found net devices under 0000:af:00.1: cvl_0_1 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # is_hw=yes 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:05.431 05:57:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:05.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:05.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:29:05.431 00:29:05.431 --- 10.0.0.2 ping statistics --- 00:29:05.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.431 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:05.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:05.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:29:05.431 00:29:05.431 --- 10.0.0.1 ping statistics --- 00:29:05.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.431 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # return 0 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # nvmfpid=3476696 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # waitforlisten 3476696 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3476696 ']' 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:05.431 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.432 [2024-12-16 05:57:38.474542] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:05.432 [2024-12-16 05:57:38.474584] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:05.432 [2024-12-16 05:57:38.532095] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:05.432 [2024-12-16 05:57:38.570257] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:05.432 [2024-12-16 05:57:38.570295] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:05.432 [2024-12-16 05:57:38.570303] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:05.432 [2024-12-16 05:57:38.570308] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:05.432 [2024-12-16 05:57:38.570314] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:05.432 [2024-12-16 05:57:38.570424] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:05.432 [2024-12-16 05:57:38.570512] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:05.432 [2024-12-16 05:57:38.570513] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.432 [2024-12-16 05:57:38.712463] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.432 Malloc0 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.432 [2024-12-16 05:57:38.772111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.432 [2024-12-16 05:57:38.780044] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.432 Malloc1 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3476728 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3476728 /var/tmp/bdevperf.sock 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3476728 ']' 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:05.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
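At this point the fixture for nvmf_multicontroller is fully in place: the target runs inside the cvl_0_0_ns_spdk namespace on 10.0.0.2, two subsystems (cnode1 and cnode2) each expose a 64 MB malloc namespace on listeners 4420 and 4421, and a bdevperf process is coming up with its own RPC socket. A condensed sketch of that setup is below, written against scripts/rpc.py directly (rpc_cmd in the test scripts is essentially a wrapper around it); paths are abbreviated and the sizes, NQNs, and ports are copied from the trace.

    # Target side (inside the namespace created earlier).
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2/Malloc1 are created the same way, also listening on 4420 and 4421.
    # Initiator side: bdevperf waits for bdev configuration over its own RPC socket.
    bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f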
00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:05.432 05:57:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.432 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:05.432 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:29:05.432 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:05.432 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.432 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.432 NVMe0n1 00:29:05.432 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.432 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:05.432 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:05.432 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.432 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.432 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.432 1 00:29:05.432 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.433 request: 00:29:05.433 { 00:29:05.433 "name": "NVMe0", 00:29:05.433 "trtype": "tcp", 00:29:05.433 "traddr": "10.0.0.2", 00:29:05.433 "adrfam": "ipv4", 00:29:05.433 "trsvcid": "4420", 00:29:05.433 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:29:05.433 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:05.433 "hostaddr": "10.0.0.1", 00:29:05.433 "prchk_reftag": false, 00:29:05.433 "prchk_guard": false, 00:29:05.433 "hdgst": false, 00:29:05.433 "ddgst": false, 00:29:05.433 "allow_unrecognized_csi": false, 00:29:05.433 "method": "bdev_nvme_attach_controller", 00:29:05.433 "req_id": 1 00:29:05.433 } 00:29:05.433 Got JSON-RPC error response 00:29:05.433 response: 00:29:05.433 { 00:29:05.433 "code": -114, 00:29:05.433 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:05.433 } 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.433 request: 00:29:05.433 { 00:29:05.433 "name": "NVMe0", 00:29:05.433 "trtype": "tcp", 00:29:05.433 "traddr": "10.0.0.2", 00:29:05.433 "adrfam": "ipv4", 00:29:05.433 "trsvcid": "4420", 00:29:05.433 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:05.433 "hostaddr": "10.0.0.1", 00:29:05.433 "prchk_reftag": false, 00:29:05.433 "prchk_guard": false, 00:29:05.433 "hdgst": false, 00:29:05.433 "ddgst": false, 00:29:05.433 "allow_unrecognized_csi": false, 00:29:05.433 "method": "bdev_nvme_attach_controller", 00:29:05.433 "req_id": 1 00:29:05.433 } 00:29:05.433 Got JSON-RPC error response 00:29:05.433 response: 00:29:05.433 { 00:29:05.433 "code": -114, 00:29:05.433 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:05.433 } 00:29:05.433 05:57:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.433 request: 00:29:05.433 { 00:29:05.433 "name": "NVMe0", 00:29:05.433 "trtype": "tcp", 00:29:05.433 "traddr": "10.0.0.2", 00:29:05.433 "adrfam": "ipv4", 00:29:05.433 "trsvcid": "4420", 00:29:05.433 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:05.433 "hostaddr": "10.0.0.1", 00:29:05.433 "prchk_reftag": false, 00:29:05.433 "prchk_guard": false, 00:29:05.433 "hdgst": false, 00:29:05.433 "ddgst": false, 00:29:05.433 "multipath": "disable", 00:29:05.433 "allow_unrecognized_csi": false, 00:29:05.433 "method": "bdev_nvme_attach_controller", 00:29:05.433 "req_id": 1 00:29:05.433 } 00:29:05.433 Got JSON-RPC error response 00:29:05.433 response: 00:29:05.433 { 00:29:05.433 "code": -114, 00:29:05.433 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:05.433 } 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:05.433 05:57:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.433 request: 00:29:05.433 { 00:29:05.433 "name": "NVMe0", 00:29:05.433 "trtype": "tcp", 00:29:05.433 "traddr": "10.0.0.2", 00:29:05.433 "adrfam": "ipv4", 00:29:05.433 "trsvcid": "4420", 00:29:05.433 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:05.433 "hostaddr": "10.0.0.1", 00:29:05.433 "prchk_reftag": false, 00:29:05.433 "prchk_guard": false, 00:29:05.433 "hdgst": false, 00:29:05.433 "ddgst": false, 00:29:05.433 "multipath": "failover", 00:29:05.433 "allow_unrecognized_csi": false, 00:29:05.433 "method": "bdev_nvme_attach_controller", 00:29:05.433 "req_id": 1 00:29:05.433 } 00:29:05.433 Got JSON-RPC error response 00:29:05.433 response: 00:29:05.433 { 00:29:05.433 "code": -114, 00:29:05.433 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:05.433 } 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:05.433 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:05.434 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:05.434 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.434 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.690 00:29:05.690 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
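The rejected attach attempts above are the point of this test: once bdevperf owns a controller named NVMe0 on 10.0.0.2:4420, re-attaching that name with a different host NQN, a different subsystem NQN, or an explicit multipath mode of disable/failover against the same portal all fail with JSON-RPC error -114, while attaching the same name to a second portal (4421) succeeds and adds a path. The commands below condense that sequence so it can be replayed by hand; this is only a sketch, and it assumes the bdevperf instance from this run is still serving RPCs on /var/tmp/bdevperf.sock and that the target still listens on 10.0.0.2 ports 4420 and 4421 (every name, NQN and address is copied from the trace).

  # thin wrapper mirroring rpc_cmd from the trace
  rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }
  # initial attach: creates bdev NVMe0n1
  rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
  # same controller name, different host NQN -> rejected with -114 ("already exists")
  rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 \
      -q nqn.2021-09-7.io.spdk:00001 || echo "rejected as expected"
  # same portal with multipath explicitly disabled -> also rejected
  rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 \
      -x disable || echo "rejected as expected"
  # second portal for the same controller name -> accepted, adds a path
  rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1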
00:29:05.691 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:05.691 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.691 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.691 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.691 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:05.691 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.691 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.947 00:29:05.947 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.947 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:05.947 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:05.947 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.947 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.947 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.947 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:05.947 05:57:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:06.878 { 00:29:06.878 "results": [ 00:29:06.878 { 00:29:06.878 "job": "NVMe0n1", 00:29:06.878 "core_mask": "0x1", 00:29:06.878 "workload": "write", 00:29:06.878 "status": "finished", 00:29:06.878 "queue_depth": 128, 00:29:06.878 "io_size": 4096, 00:29:06.878 "runtime": 1.00703, 00:29:06.878 "iops": 25175.019612126747, 00:29:06.878 "mibps": 98.3399203598701, 00:29:06.878 "io_failed": 0, 00:29:06.878 "io_timeout": 0, 00:29:06.878 "avg_latency_us": 5076.344785346136, 00:29:06.878 "min_latency_us": 3089.554285714286, 00:29:06.878 "max_latency_us": 10860.251428571428 00:29:06.878 } 00:29:06.878 ], 00:29:06.878 "core_count": 1 00:29:06.878 } 00:29:06.878 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:06.878 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.878 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:07.136 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.136 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:07.136 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3476728 00:29:07.136 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 3476728 ']' 00:29:07.136 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3476728 00:29:07.136 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:29:07.136 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:07.136 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3476728 00:29:07.136 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:07.136 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:07.136 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3476728' 00:29:07.136 killing process with pid 3476728 00:29:07.136 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3476728 00:29:07.136 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3476728 00:29:07.136 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:07.136 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.136 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:07.136 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.136 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:07.136 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.136 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:07.395 05:57:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.395 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:07.395 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:07.395 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:29:07.395 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:07.395 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:29:07.395 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:29:07.395 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:07.395 [2024-12-16 05:57:38.882808] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:07.395 [2024-12-16 05:57:38.882861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3476728 ] 00:29:07.395 [2024-12-16 05:57:38.936481] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.395 [2024-12-16 05:57:38.976884] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.395 [2024-12-16 05:57:39.577314] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name 092abfce-725d-431b-904d-f93191ed19f4 already exists 00:29:07.395 [2024-12-16 05:57:39.577342] bdev.c:7837:bdev_register: *ERROR*: Unable to add uuid:092abfce-725d-431b-904d-f93191ed19f4 alias for bdev NVMe1n1 00:29:07.395 [2024-12-16 05:57:39.577349] bdev_nvme.c:4481:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:07.395 Running I/O for 1 seconds... 00:29:07.395 25114.00 IOPS, 98.10 MiB/s 00:29:07.395 Latency(us) 00:29:07.395 [2024-12-16T04:57:41.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.395 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:07.395 NVMe0n1 : 1.01 25175.02 98.34 0.00 0.00 5076.34 3089.55 10860.25 00:29:07.395 [2024-12-16T04:57:41.251Z] =================================================================================================================== 00:29:07.395 [2024-12-16T04:57:41.251Z] Total : 25175.02 98.34 0.00 0.00 5076.34 3089.55 10860.25 00:29:07.395 Received shutdown signal, test time was about 1.000000 seconds 00:29:07.395 00:29:07.395 Latency(us) 00:29:07.395 [2024-12-16T04:57:41.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.395 [2024-12-16T04:57:41.251Z] =================================================================================================================== 00:29:07.395 [2024-12-16T04:57:41.251Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:07.395 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:07.395 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:07.395 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:29:07.395 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:07.395 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:07.395 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:07.395 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:07.395 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:07.396 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:07.396 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:07.396 rmmod nvme_tcp 00:29:07.396 rmmod nvme_fabrics 00:29:07.396 rmmod nvme_keyring 00:29:07.396 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:07.396 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:07.396 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:07.396 
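The Latency(us) table inside try.txt above and the perform_tests JSON a few lines earlier describe the same run: 25175.02 write IOPS at 98.34 MiB/s with roughly 5076 us average latency at queue depth 128. Those figures are internally consistent, which is a quick way to sanity-check a bdevperf result; a back-of-the-envelope check (not part of the test output, numbers copied from the JSON above) is:

  # throughput = IOPS * io_size; with a fixed queue depth, Little's law gives avg latency ~= qd / IOPS
  awk 'BEGIN {
    iops = 25175.019612126747; io_size = 4096; qd = 128
    printf "MiB/s        : %.4f\n", iops * io_size / (1024 * 1024)   # ~98.34, matches the reported mibps
    printf "avg lat (us) : %.1f\n",  qd / iops * 1e6                 # ~5084, close to avg_latency_us 5076.3
  }'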
05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@513 -- # '[' -n 3476696 ']' 00:29:07.396 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # killprocess 3476696 00:29:07.396 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3476696 ']' 00:29:07.396 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3476696 00:29:07.396 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:29:07.396 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:07.396 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3476696 00:29:07.396 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:07.396 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:07.396 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3476696' 00:29:07.396 killing process with pid 3476696 00:29:07.396 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3476696 00:29:07.396 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3476696 00:29:07.654 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:07.654 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:07.654 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:07.654 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:07.654 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-save 00:29:07.654 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:07.654 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-restore 00:29:07.654 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:07.654 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:07.654 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.654 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.654 05:57:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:10.187 00:29:10.187 real 0m11.085s 00:29:10.187 user 0m12.263s 00:29:10.187 sys 0m5.007s 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:10.187 ************************************ 00:29:10.187 END TEST nvmf_multicontroller 00:29:10.187 ************************************ 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.187 ************************************ 00:29:10.187 START TEST nvmf_aer 00:29:10.187 ************************************ 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:10.187 * Looking for test storage... 00:29:10.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:10.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.187 --rc genhtml_branch_coverage=1 00:29:10.187 --rc genhtml_function_coverage=1 00:29:10.187 --rc genhtml_legend=1 00:29:10.187 --rc geninfo_all_blocks=1 00:29:10.187 --rc geninfo_unexecuted_blocks=1 00:29:10.187 00:29:10.187 ' 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:10.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.187 --rc genhtml_branch_coverage=1 00:29:10.187 --rc genhtml_function_coverage=1 00:29:10.187 --rc genhtml_legend=1 00:29:10.187 --rc geninfo_all_blocks=1 00:29:10.187 --rc geninfo_unexecuted_blocks=1 00:29:10.187 00:29:10.187 ' 00:29:10.187 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:10.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.188 --rc genhtml_branch_coverage=1 00:29:10.188 --rc genhtml_function_coverage=1 00:29:10.188 --rc genhtml_legend=1 00:29:10.188 --rc geninfo_all_blocks=1 00:29:10.188 --rc geninfo_unexecuted_blocks=1 00:29:10.188 00:29:10.188 ' 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:10.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.188 --rc genhtml_branch_coverage=1 00:29:10.188 --rc genhtml_function_coverage=1 00:29:10.188 --rc genhtml_legend=1 00:29:10.188 --rc geninfo_all_blocks=1 00:29:10.188 --rc geninfo_unexecuted_blocks=1 00:29:10.188 00:29:10.188 ' 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:10.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:10.188 05:57:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:15.452 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:15.452 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:15.452 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:15.452 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:15.452 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:15.452 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:15.453 05:57:49 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:15.453 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:15.453 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:15.453 Found net devices under 0000:af:00.0: cvl_0_0 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:15.453 Found net devices under 0000:af:00.1: cvl_0_1 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # is_hw=yes 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:15.453 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:15.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:15.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:29:15.712 00:29:15.712 --- 10.0.0.2 ping statistics --- 00:29:15.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.712 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:15.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:15.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:29:15.712 00:29:15.712 --- 10.0.0.1 ping statistics --- 00:29:15.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.712 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # return 0 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # nvmfpid=3480470 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # waitforlisten 3480470 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 3480470 ']' 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:15.712 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:15.712 [2024-12-16 05:57:49.424308] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:15.712 [2024-12-16 05:57:49.424351] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:15.712 [2024-12-16 05:57:49.484386] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:15.712 [2024-12-16 05:57:49.525837] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:15.712 [2024-12-16 05:57:49.525879] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:15.712 [2024-12-16 05:57:49.525887] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:15.712 [2024-12-16 05:57:49.525893] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:15.712 [2024-12-16 05:57:49.525899] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:15.712 [2024-12-16 05:57:49.525942] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.712 [2024-12-16 05:57:49.526031] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:15.712 [2024-12-16 05:57:49.526124] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:15.712 [2024-12-16 05:57:49.526125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.970 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:15.970 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:29:15.970 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:15.970 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:15.970 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:15.970 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.970 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:15.970 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.970 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:15.970 [2024-12-16 05:57:49.668865] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.970 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.970 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:15.970 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.970 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:15.970 Malloc0 00:29:15.970 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.970 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:15.970 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.970 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:15.970 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:29:15.970 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:15.971 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.971 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:15.971 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.971 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:15.971 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.971 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:15.971 [2024-12-16 05:57:49.720132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.971 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.971 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:15.971 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.971 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:15.971 [ 00:29:15.971 { 00:29:15.971 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:15.971 "subtype": "Discovery", 00:29:15.971 "listen_addresses": [], 00:29:15.971 "allow_any_host": true, 00:29:15.971 "hosts": [] 00:29:15.971 }, 00:29:15.971 { 00:29:15.971 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:15.971 "subtype": "NVMe", 00:29:15.971 "listen_addresses": [ 00:29:15.971 { 00:29:15.971 "trtype": "TCP", 00:29:15.971 "adrfam": "IPv4", 00:29:15.971 "traddr": "10.0.0.2", 00:29:15.971 "trsvcid": "4420" 00:29:15.971 } 00:29:15.971 ], 00:29:15.971 "allow_any_host": true, 00:29:15.971 "hosts": [], 00:29:15.971 "serial_number": "SPDK00000000000001", 00:29:15.971 "model_number": "SPDK bdev Controller", 00:29:15.971 "max_namespaces": 2, 00:29:15.971 "min_cntlid": 1, 00:29:15.971 "max_cntlid": 65519, 00:29:15.971 "namespaces": [ 00:29:15.971 { 00:29:15.971 "nsid": 1, 00:29:15.971 "bdev_name": "Malloc0", 00:29:15.971 "name": "Malloc0", 00:29:15.971 "nguid": "E95DA755B043433BBA866BF5760159DA", 00:29:15.971 "uuid": "e95da755-b043-433b-ba86-6bf5760159da" 00:29:15.971 } 00:29:15.971 ] 00:29:15.971 } 00:29:15.971 ] 00:29:15.971 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.971 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:15.971 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:15.971 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3480675 00:29:15.971 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:15.971 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:15.971 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:29:15.971 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:15.971 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:29:15.971 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:29:15.971 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:16.228 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:16.228 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:29:16.228 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:29:16.228 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:16.228 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:16.228 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:29:16.228 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:29:16.228 05:57:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:16.228 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:16.228 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:16.228 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:29:16.228 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:16.228 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.228 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.486 Malloc1 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.486 [ 00:29:16.486 { 00:29:16.486 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:16.486 "subtype": "Discovery", 00:29:16.486 "listen_addresses": [], 00:29:16.486 "allow_any_host": true, 00:29:16.486 "hosts": [] 00:29:16.486 }, 00:29:16.486 { 00:29:16.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:16.486 "subtype": "NVMe", 00:29:16.486 "listen_addresses": [ 00:29:16.486 { 00:29:16.486 "trtype": "TCP", 00:29:16.486 "adrfam": "IPv4", 00:29:16.486 "traddr": "10.0.0.2", 00:29:16.486 "trsvcid": "4420" 00:29:16.486 } 00:29:16.486 ], 00:29:16.486 "allow_any_host": true, 00:29:16.486 "hosts": [], 00:29:16.486 "serial_number": "SPDK00000000000001", 00:29:16.486 "model_number": "SPDK bdev Controller", 00:29:16.486 "max_namespaces": 2, 00:29:16.486 "min_cntlid": 1, 00:29:16.486 "max_cntlid": 65519, 00:29:16.486 "namespaces": [ 00:29:16.486 
{ 00:29:16.486 "nsid": 1, 00:29:16.486 "bdev_name": "Malloc0", 00:29:16.486 "name": "Malloc0", 00:29:16.486 "nguid": "E95DA755B043433BBA866BF5760159DA", 00:29:16.486 "uuid": "e95da755-b043-433b-ba86-6bf5760159da" 00:29:16.486 }, 00:29:16.486 { 00:29:16.486 "nsid": 2, 00:29:16.486 "bdev_name": "Malloc1", 00:29:16.486 "name": "Malloc1", 00:29:16.486 Asynchronous Event Request test 00:29:16.486 Attaching to 10.0.0.2 00:29:16.486 Attached to 10.0.0.2 00:29:16.486 Registering asynchronous event callbacks... 00:29:16.486 Starting namespace attribute notice tests for all controllers... 00:29:16.486 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:16.486 aer_cb - Changed Namespace 00:29:16.486 Cleaning up... 00:29:16.486 "nguid": "B8266986F0F4402BA34AA21A0CD484DE", 00:29:16.486 "uuid": "b8266986-f0f4-402b-a34a-a21a0cd484de" 00:29:16.486 } 00:29:16.486 ] 00:29:16.486 } 00:29:16.486 ] 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3480675 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:16.486 rmmod nvme_tcp 00:29:16.486 rmmod nvme_fabrics 00:29:16.486 rmmod nvme_keyring 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@513 -- # '[' -n 3480470 ']' 
00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # killprocess 3480470 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 3480470 ']' 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 3480470 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3480470 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3480470' 00:29:16.486 killing process with pid 3480470 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 3480470 00:29:16.486 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 3480470 00:29:16.744 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:16.744 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:16.744 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:16.744 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:16.744 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-save 00:29:16.744 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-restore 00:29:16.744 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:16.744 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:16.745 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:16.745 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.745 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.745 05:57:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.701 05:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:18.985 00:29:18.985 real 0m9.041s 00:29:18.985 user 0m5.312s 00:29:18.985 sys 0m4.620s 00:29:18.985 05:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:18.985 05:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:18.985 ************************************ 00:29:18.985 END TEST nvmf_aer 00:29:18.985 ************************************ 00:29:18.985 05:57:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:18.985 05:57:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:18.985 05:57:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.986 ************************************ 00:29:18.986 START TEST nvmf_async_init 00:29:18.986 
************************************ 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:18.986 * Looking for test storage... 00:29:18.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:18.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.986 --rc genhtml_branch_coverage=1 00:29:18.986 --rc genhtml_function_coverage=1 00:29:18.986 --rc genhtml_legend=1 00:29:18.986 --rc geninfo_all_blocks=1 00:29:18.986 --rc geninfo_unexecuted_blocks=1 00:29:18.986 00:29:18.986 ' 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:18.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.986 --rc genhtml_branch_coverage=1 00:29:18.986 --rc genhtml_function_coverage=1 00:29:18.986 --rc genhtml_legend=1 00:29:18.986 --rc geninfo_all_blocks=1 00:29:18.986 --rc geninfo_unexecuted_blocks=1 00:29:18.986 00:29:18.986 ' 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:18.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.986 --rc genhtml_branch_coverage=1 00:29:18.986 --rc genhtml_function_coverage=1 00:29:18.986 --rc genhtml_legend=1 00:29:18.986 --rc geninfo_all_blocks=1 00:29:18.986 --rc geninfo_unexecuted_blocks=1 00:29:18.986 00:29:18.986 ' 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:18.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.986 --rc genhtml_branch_coverage=1 00:29:18.986 --rc genhtml_function_coverage=1 00:29:18.986 --rc genhtml_legend=1 00:29:18.986 --rc geninfo_all_blocks=1 00:29:18.986 --rc geninfo_unexecuted_blocks=1 00:29:18.986 00:29:18.986 ' 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.986 05:57:52 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:18.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:18.986 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:18.987 05:57:52 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:18.987 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:18.987 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:18.987 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=afeddaaca2b04721867155e7f115dc70 00:29:18.987 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:18.987 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:18.987 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.987 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:18.987 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:18.987 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:18.987 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.987 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.987 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.987 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:18.987 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:18.987 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:18.987 05:57:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.252 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:24.253 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:24.253 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # 
(( 0 > 0 )) 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:24.253 Found net devices under 0000:af:00.0: cvl_0_0 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:24.253 Found net devices under 0000:af:00.1: cvl_0_1 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # is_hw=yes 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.253 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:24.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:29:24.566 00:29:24.566 --- 10.0.0.2 ping statistics --- 00:29:24.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.566 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:24.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:29:24.566 00:29:24.566 --- 10.0.0.1 ping statistics --- 00:29:24.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.566 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # return 0 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.566 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # nvmfpid=3484151 00:29:24.567 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # waitforlisten 3484151 00:29:24.567 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:24.567 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 3484151 ']' 00:29:24.567 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.567 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:24.567 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.567 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:24.567 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.825 [2024-12-16 05:57:58.460991] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:24.825 [2024-12-16 05:57:58.461033] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.825 [2024-12-16 05:57:58.519722] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.825 [2024-12-16 05:57:58.559032] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.825 [2024-12-16 05:57:58.559069] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.825 [2024-12-16 05:57:58.559076] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.825 [2024-12-16 05:57:58.559083] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.825 [2024-12-16 05:57:58.559089] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:24.825 [2024-12-16 05:57:58.559112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.825 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:24.825 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:29:24.825 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:24.825 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:24.825 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.083 [2024-12-16 05:57:58.696285] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.083 null0 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g afeddaaca2b04721867155e7f115dc70 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.083 [2024-12-16 05:57:58.744536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.083 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.342 nvme0n1 00:29:25.342 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.342 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:25.342 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.342 05:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.342 [ 00:29:25.342 { 00:29:25.342 "name": "nvme0n1", 00:29:25.342 "aliases": [ 00:29:25.342 "afeddaac-a2b0-4721-8671-55e7f115dc70" 00:29:25.342 ], 00:29:25.342 "product_name": "NVMe disk", 00:29:25.342 "block_size": 512, 00:29:25.342 "num_blocks": 2097152, 00:29:25.342 "uuid": "afeddaac-a2b0-4721-8671-55e7f115dc70", 00:29:25.342 "numa_id": 1, 00:29:25.342 "assigned_rate_limits": { 00:29:25.342 "rw_ios_per_sec": 0, 00:29:25.342 "rw_mbytes_per_sec": 0, 00:29:25.342 "r_mbytes_per_sec": 0, 00:29:25.342 "w_mbytes_per_sec": 0 00:29:25.342 }, 00:29:25.342 "claimed": false, 00:29:25.342 "zoned": false, 00:29:25.342 "supported_io_types": { 00:29:25.342 "read": true, 00:29:25.342 "write": true, 00:29:25.342 "unmap": false, 00:29:25.342 "flush": true, 00:29:25.342 "reset": true, 00:29:25.342 "nvme_admin": true, 00:29:25.342 "nvme_io": true, 00:29:25.342 "nvme_io_md": false, 00:29:25.342 "write_zeroes": true, 00:29:25.342 "zcopy": false, 00:29:25.342 "get_zone_info": false, 00:29:25.342 "zone_management": false, 00:29:25.342 "zone_append": false, 00:29:25.342 "compare": true, 00:29:25.342 "compare_and_write": true, 00:29:25.342 "abort": true, 00:29:25.342 "seek_hole": false, 00:29:25.342 "seek_data": false, 00:29:25.342 "copy": true, 00:29:25.342 "nvme_iov_md": false 00:29:25.342 }, 00:29:25.342 
"memory_domains": [ 00:29:25.342 { 00:29:25.342 "dma_device_id": "system", 00:29:25.342 "dma_device_type": 1 00:29:25.342 } 00:29:25.342 ], 00:29:25.342 "driver_specific": { 00:29:25.342 "nvme": [ 00:29:25.342 { 00:29:25.342 "trid": { 00:29:25.342 "trtype": "TCP", 00:29:25.342 "adrfam": "IPv4", 00:29:25.342 "traddr": "10.0.0.2", 00:29:25.342 "trsvcid": "4420", 00:29:25.342 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:25.342 }, 00:29:25.342 "ctrlr_data": { 00:29:25.342 "cntlid": 1, 00:29:25.342 "vendor_id": "0x8086", 00:29:25.342 "model_number": "SPDK bdev Controller", 00:29:25.342 "serial_number": "00000000000000000000", 00:29:25.342 "firmware_revision": "24.09.1", 00:29:25.342 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:25.342 "oacs": { 00:29:25.342 "security": 0, 00:29:25.342 "format": 0, 00:29:25.342 "firmware": 0, 00:29:25.342 "ns_manage": 0 00:29:25.342 }, 00:29:25.342 "multi_ctrlr": true, 00:29:25.342 "ana_reporting": false 00:29:25.342 }, 00:29:25.342 "vs": { 00:29:25.342 "nvme_version": "1.3" 00:29:25.342 }, 00:29:25.342 "ns_data": { 00:29:25.342 "id": 1, 00:29:25.342 "can_share": true 00:29:25.342 } 00:29:25.342 } 00:29:25.342 ], 00:29:25.342 "mp_policy": "active_passive" 00:29:25.342 } 00:29:25.342 } 00:29:25.342 ] 00:29:25.342 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.342 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:25.342 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.342 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.342 [2024-12-16 05:57:59.009068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:25.342 [2024-12-16 05:57:59.009134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2019260 (9): Bad file descriptor 00:29:25.342 [2024-12-16 05:57:59.152932] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:25.342 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.342 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:25.342 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.342 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.342 [ 00:29:25.342 { 00:29:25.342 "name": "nvme0n1", 00:29:25.342 "aliases": [ 00:29:25.342 "afeddaac-a2b0-4721-8671-55e7f115dc70" 00:29:25.342 ], 00:29:25.342 "product_name": "NVMe disk", 00:29:25.342 "block_size": 512, 00:29:25.342 "num_blocks": 2097152, 00:29:25.342 "uuid": "afeddaac-a2b0-4721-8671-55e7f115dc70", 00:29:25.342 "numa_id": 1, 00:29:25.342 "assigned_rate_limits": { 00:29:25.342 "rw_ios_per_sec": 0, 00:29:25.342 "rw_mbytes_per_sec": 0, 00:29:25.342 "r_mbytes_per_sec": 0, 00:29:25.342 "w_mbytes_per_sec": 0 00:29:25.342 }, 00:29:25.342 "claimed": false, 00:29:25.342 "zoned": false, 00:29:25.342 "supported_io_types": { 00:29:25.342 "read": true, 00:29:25.342 "write": true, 00:29:25.342 "unmap": false, 00:29:25.342 "flush": true, 00:29:25.342 "reset": true, 00:29:25.342 "nvme_admin": true, 00:29:25.342 "nvme_io": true, 00:29:25.342 "nvme_io_md": false, 00:29:25.342 "write_zeroes": true, 00:29:25.342 "zcopy": false, 00:29:25.342 "get_zone_info": false, 00:29:25.342 "zone_management": false, 00:29:25.342 "zone_append": false, 00:29:25.342 "compare": true, 00:29:25.342 "compare_and_write": true, 00:29:25.342 "abort": true, 00:29:25.342 "seek_hole": false, 00:29:25.342 "seek_data": false, 00:29:25.342 "copy": true, 00:29:25.342 "nvme_iov_md": false 00:29:25.342 }, 00:29:25.342 "memory_domains": [ 00:29:25.342 { 00:29:25.342 "dma_device_id": "system", 00:29:25.342 "dma_device_type": 1 00:29:25.342 } 00:29:25.342 ], 00:29:25.342 "driver_specific": { 00:29:25.342 "nvme": [ 00:29:25.342 { 00:29:25.342 "trid": { 00:29:25.342 "trtype": "TCP", 00:29:25.342 "adrfam": "IPv4", 00:29:25.342 "traddr": "10.0.0.2", 00:29:25.342 "trsvcid": "4420", 00:29:25.342 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:25.342 }, 00:29:25.342 "ctrlr_data": { 00:29:25.342 "cntlid": 2, 00:29:25.342 "vendor_id": "0x8086", 00:29:25.342 "model_number": "SPDK bdev Controller", 00:29:25.342 "serial_number": "00000000000000000000", 00:29:25.342 "firmware_revision": "24.09.1", 00:29:25.342 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:25.342 "oacs": { 00:29:25.342 "security": 0, 00:29:25.342 "format": 0, 00:29:25.342 "firmware": 0, 00:29:25.342 "ns_manage": 0 00:29:25.342 }, 00:29:25.342 "multi_ctrlr": true, 00:29:25.342 "ana_reporting": false 00:29:25.342 }, 00:29:25.342 "vs": { 00:29:25.342 "nvme_version": "1.3" 00:29:25.342 }, 00:29:25.342 "ns_data": { 00:29:25.342 "id": 1, 00:29:25.342 "can_share": true 00:29:25.342 } 00:29:25.342 } 00:29:25.342 ], 00:29:25.342 "mp_policy": "active_passive" 00:29:25.342 } 00:29:25.342 } 00:29:25.342 ] 00:29:25.342 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.342 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.342 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.342 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.342 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:29:25.342 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:25.342 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.pD4lM0P3as 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.pD4lM0P3as 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.pD4lM0P3as 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.601 [2024-12-16 05:57:59.225711] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:25.601 [2024-12-16 05:57:59.225816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.601 [2024-12-16 05:57:59.249791] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:25.601 nvme0n1 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.601 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.601 [ 00:29:25.601 { 00:29:25.601 "name": "nvme0n1", 00:29:25.601 "aliases": [ 00:29:25.601 "afeddaac-a2b0-4721-8671-55e7f115dc70" 00:29:25.601 ], 00:29:25.601 "product_name": "NVMe disk", 00:29:25.601 "block_size": 512, 00:29:25.601 "num_blocks": 2097152, 00:29:25.601 "uuid": "afeddaac-a2b0-4721-8671-55e7f115dc70", 00:29:25.601 "numa_id": 1, 00:29:25.601 "assigned_rate_limits": { 00:29:25.601 "rw_ios_per_sec": 0, 00:29:25.601 "rw_mbytes_per_sec": 0, 00:29:25.601 "r_mbytes_per_sec": 0, 00:29:25.601 "w_mbytes_per_sec": 0 00:29:25.601 }, 00:29:25.601 "claimed": false, 00:29:25.601 "zoned": false, 00:29:25.601 "supported_io_types": { 00:29:25.601 "read": true, 00:29:25.601 "write": true, 00:29:25.601 "unmap": false, 00:29:25.601 "flush": true, 00:29:25.601 "reset": true, 00:29:25.601 "nvme_admin": true, 00:29:25.601 "nvme_io": true, 00:29:25.601 "nvme_io_md": false, 00:29:25.601 "write_zeroes": true, 00:29:25.601 "zcopy": false, 00:29:25.601 "get_zone_info": false, 00:29:25.601 "zone_management": false, 00:29:25.601 "zone_append": false, 00:29:25.601 "compare": true, 00:29:25.601 "compare_and_write": true, 00:29:25.601 "abort": true, 00:29:25.601 "seek_hole": false, 00:29:25.601 "seek_data": false, 00:29:25.601 "copy": true, 00:29:25.601 "nvme_iov_md": false 00:29:25.601 }, 00:29:25.601 "memory_domains": [ 00:29:25.601 { 00:29:25.601 "dma_device_id": "system", 00:29:25.601 "dma_device_type": 1 00:29:25.601 } 00:29:25.601 ], 00:29:25.601 "driver_specific": { 00:29:25.601 "nvme": [ 00:29:25.601 { 00:29:25.601 "trid": { 00:29:25.601 "trtype": "TCP", 00:29:25.601 "adrfam": "IPv4", 00:29:25.601 "traddr": "10.0.0.2", 00:29:25.601 "trsvcid": "4421", 00:29:25.601 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:25.601 }, 00:29:25.601 "ctrlr_data": { 00:29:25.601 "cntlid": 3, 00:29:25.601 "vendor_id": "0x8086", 00:29:25.601 "model_number": "SPDK bdev Controller", 00:29:25.601 "serial_number": "00000000000000000000", 00:29:25.601 "firmware_revision": "24.09.1", 00:29:25.601 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:25.601 "oacs": { 00:29:25.602 "security": 0, 00:29:25.602 "format": 0, 00:29:25.602 "firmware": 0, 00:29:25.602 "ns_manage": 0 00:29:25.602 }, 00:29:25.602 "multi_ctrlr": true, 00:29:25.602 "ana_reporting": false 00:29:25.602 }, 00:29:25.602 "vs": { 00:29:25.602 "nvme_version": "1.3" 00:29:25.602 }, 00:29:25.602 "ns_data": { 00:29:25.602 "id": 1, 00:29:25.602 "can_share": true 00:29:25.602 } 00:29:25.602 } 00:29:25.602 ], 00:29:25.602 "mp_policy": "active_passive" 00:29:25.602 } 00:29:25.602 } 00:29:25.602 ] 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.pD4lM0P3as 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:25.602 rmmod nvme_tcp 00:29:25.602 rmmod nvme_fabrics 00:29:25.602 rmmod nvme_keyring 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@513 -- # '[' -n 3484151 ']' 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # killprocess 3484151 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 3484151 ']' 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 3484151 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:25.602 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3484151 00:29:25.860 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:25.860 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:25.860 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3484151' 00:29:25.860 killing process with pid 3484151 00:29:25.860 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 3484151 00:29:25.860 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 3484151 00:29:25.860 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:25.860 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:25.860 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:25.860 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:25.860 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-save 00:29:25.860 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:25.860 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-restore 00:29:25.860 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:25.860 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:25.860 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:29:25.860 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.860 05:57:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:28.391 00:29:28.391 real 0m9.103s 00:29:28.391 user 0m2.904s 00:29:28.391 sys 0m4.547s 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:28.391 ************************************ 00:29:28.391 END TEST nvmf_async_init 00:29:28.391 ************************************ 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.391 ************************************ 00:29:28.391 START TEST dma 00:29:28.391 ************************************ 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:28.391 * Looking for test storage... 00:29:28.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:28.391 05:58:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:28.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.391 --rc genhtml_branch_coverage=1 00:29:28.391 --rc genhtml_function_coverage=1 00:29:28.391 --rc genhtml_legend=1 00:29:28.391 --rc geninfo_all_blocks=1 00:29:28.391 --rc geninfo_unexecuted_blocks=1 00:29:28.391 00:29:28.391 ' 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:28.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.392 --rc genhtml_branch_coverage=1 00:29:28.392 --rc genhtml_function_coverage=1 00:29:28.392 --rc genhtml_legend=1 00:29:28.392 --rc geninfo_all_blocks=1 00:29:28.392 --rc geninfo_unexecuted_blocks=1 00:29:28.392 00:29:28.392 ' 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:28.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.392 --rc genhtml_branch_coverage=1 00:29:28.392 --rc genhtml_function_coverage=1 00:29:28.392 --rc genhtml_legend=1 00:29:28.392 --rc geninfo_all_blocks=1 00:29:28.392 --rc geninfo_unexecuted_blocks=1 00:29:28.392 00:29:28.392 ' 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:28.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.392 --rc genhtml_branch_coverage=1 00:29:28.392 --rc genhtml_function_coverage=1 00:29:28.392 --rc genhtml_legend=1 00:29:28.392 --rc geninfo_all_blocks=1 00:29:28.392 --rc geninfo_unexecuted_blocks=1 00:29:28.392 00:29:28.392 ' 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.392 
05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:28.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:28.392 00:29:28.392 real 0m0.198s 00:29:28.392 user 0m0.118s 00:29:28.392 sys 0m0.094s 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:28.392 05:58:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:28.392 ************************************ 00:29:28.392 END TEST dma 00:29:28.392 ************************************ 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.392 ************************************ 00:29:28.392 START TEST nvmf_identify 00:29:28.392 
************************************ 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:28.392 * Looking for test storage... 00:29:28.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:28.392 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:28.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.393 --rc genhtml_branch_coverage=1 00:29:28.393 --rc genhtml_function_coverage=1 00:29:28.393 --rc genhtml_legend=1 00:29:28.393 --rc geninfo_all_blocks=1 00:29:28.393 --rc geninfo_unexecuted_blocks=1 00:29:28.393 00:29:28.393 ' 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:28.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.393 --rc genhtml_branch_coverage=1 00:29:28.393 --rc genhtml_function_coverage=1 00:29:28.393 --rc genhtml_legend=1 00:29:28.393 --rc geninfo_all_blocks=1 00:29:28.393 --rc geninfo_unexecuted_blocks=1 00:29:28.393 00:29:28.393 ' 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:28.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.393 --rc genhtml_branch_coverage=1 00:29:28.393 --rc genhtml_function_coverage=1 00:29:28.393 --rc genhtml_legend=1 00:29:28.393 --rc geninfo_all_blocks=1 00:29:28.393 --rc geninfo_unexecuted_blocks=1 00:29:28.393 00:29:28.393 ' 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:28.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.393 --rc genhtml_branch_coverage=1 00:29:28.393 --rc genhtml_function_coverage=1 00:29:28.393 --rc genhtml_legend=1 00:29:28.393 --rc geninfo_all_blocks=1 00:29:28.393 --rc geninfo_unexecuted_blocks=1 00:29:28.393 00:29:28.393 ' 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.393 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:28.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.652 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:28.653 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:28.653 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:28.653 05:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:35.217 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:35.218 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:35.218 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.218 
05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:35.218 Found net devices under 0000:af:00.0: cvl_0_0 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:35.218 Found net devices under 0000:af:00.1: cvl_0_1 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # is_hw=yes 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:35.218 05:58:07 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:35.218 05:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:35.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:35.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:29:35.218 00:29:35.218 --- 10.0.0.2 ping statistics --- 00:29:35.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.218 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:35.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:35.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:29:35.218 00:29:35.218 --- 10.0.0.1 ping statistics --- 00:29:35.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.218 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # return 0 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3487908 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3487908 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 3487908 ']' 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:35.218 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:35.218 [2024-12-16 05:58:08.182288] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
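Before nvmf_tgt is launched for the identify test, nvmf_tcp_init has moved the target-side port of the e810 pair into a private network namespace and verified connectivity in both directions with the pings above. Reduced to the essential commands (interface names and addresses as they appear in this trace; a condensed sketch, not the full common.sh logic):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment \
        --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

The target is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why its 10.0.0.2:4420 listener is only reachable through cvl_0_1.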
00:29:35.218 [2024-12-16 05:58:08.182334] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:35.218 [2024-12-16 05:58:08.241631] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:35.218 [2024-12-16 05:58:08.283626] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:35.218 [2024-12-16 05:58:08.283668] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:35.218 [2024-12-16 05:58:08.283676] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:35.218 [2024-12-16 05:58:08.283682] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:35.218 [2024-12-16 05:58:08.283687] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:35.219 [2024-12-16 05:58:08.283740] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.219 [2024-12-16 05:58:08.283818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:35.219 [2024-12-16 05:58:08.283907] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:35.219 [2024-12-16 05:58:08.283908] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:35.219 [2024-12-16 05:58:08.394516] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:35.219 Malloc0 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:35.219 [2024-12-16 05:58:08.482238] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:35.219 [ 00:29:35.219 { 00:29:35.219 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:35.219 "subtype": "Discovery", 00:29:35.219 "listen_addresses": [ 00:29:35.219 { 00:29:35.219 "trtype": "TCP", 00:29:35.219 "adrfam": "IPv4", 00:29:35.219 "traddr": "10.0.0.2", 00:29:35.219 "trsvcid": "4420" 00:29:35.219 } 00:29:35.219 ], 00:29:35.219 "allow_any_host": true, 00:29:35.219 "hosts": [] 00:29:35.219 }, 00:29:35.219 { 00:29:35.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:35.219 "subtype": "NVMe", 00:29:35.219 "listen_addresses": [ 00:29:35.219 { 00:29:35.219 "trtype": "TCP", 00:29:35.219 "adrfam": "IPv4", 00:29:35.219 "traddr": "10.0.0.2", 00:29:35.219 "trsvcid": "4420" 00:29:35.219 } 00:29:35.219 ], 00:29:35.219 "allow_any_host": true, 00:29:35.219 "hosts": [], 00:29:35.219 "serial_number": "SPDK00000000000001", 00:29:35.219 "model_number": "SPDK bdev Controller", 00:29:35.219 "max_namespaces": 32, 00:29:35.219 "min_cntlid": 1, 00:29:35.219 "max_cntlid": 65519, 00:29:35.219 "namespaces": [ 00:29:35.219 { 00:29:35.219 "nsid": 1, 00:29:35.219 "bdev_name": "Malloc0", 00:29:35.219 "name": "Malloc0", 00:29:35.219 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:35.219 "eui64": "ABCDEF0123456789", 00:29:35.219 "uuid": "0a20d417-b447-42fc-a15f-14edbd35cf7c" 00:29:35.219 } 00:29:35.219 ] 00:29:35.219 } 00:29:35.219 ] 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.219 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:35.219 [2024-12-16 05:58:08.533631] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:35.219 [2024-12-16 05:58:08.533664] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3487940 ] 00:29:35.219 [2024-12-16 05:58:08.564153] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:35.219 [2024-12-16 05:58:08.564200] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:35.219 [2024-12-16 05:58:08.564205] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:35.219 [2024-12-16 05:58:08.564216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:35.219 [2024-12-16 05:58:08.564225] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:35.219 [2024-12-16 05:58:08.564735] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:35.219 [2024-12-16 05:58:08.564770] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xafb0d0 0 00:29:35.219 [2024-12-16 05:58:08.582860] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:35.219 [2024-12-16 05:58:08.582874] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:35.219 [2024-12-16 05:58:08.582879] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:35.219 [2024-12-16 05:58:08.582882] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:35.219 [2024-12-16 05:58:08.582911] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.219 [2024-12-16 05:58:08.582916] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.219 [2024-12-16 05:58:08.582920] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafb0d0) 00:29:35.219 [2024-12-16 05:58:08.582931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:35.219 [2024-12-16 05:58:08.582949] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb65540, cid 0, qid 0 00:29:35.219 [2024-12-16 05:58:08.590860] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.219 [2024-12-16 05:58:08.590870] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.219 [2024-12-16 05:58:08.590874] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.219 [2024-12-16 05:58:08.590878] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb65540) on tqpair=0xafb0d0 00:29:35.219 [2024-12-16 05:58:08.590891] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:35.219 [2024-12-16 05:58:08.590898] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:35.219 [2024-12-16 05:58:08.590903] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:35.219 [2024-12-16 05:58:08.590916] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.219 [2024-12-16 05:58:08.590920] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.219 [2024-12-16 05:58:08.590925] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafb0d0) 00:29:35.219 [2024-12-16 05:58:08.590932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.219 [2024-12-16 05:58:08.590946] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb65540, cid 0, qid 0 00:29:35.219 [2024-12-16 05:58:08.591120] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.219 [2024-12-16 05:58:08.591126] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.219 [2024-12-16 05:58:08.591129] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.219 [2024-12-16 05:58:08.591132] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb65540) on tqpair=0xafb0d0 00:29:35.219 [2024-12-16 05:58:08.591137] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:35.219 [2024-12-16 05:58:08.591143] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:35.219 [2024-12-16 05:58:08.591149] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.219 [2024-12-16 05:58:08.591152] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.219 [2024-12-16 05:58:08.591156] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafb0d0) 00:29:35.219 [2024-12-16 05:58:08.591161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.219 [2024-12-16 05:58:08.591172] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb65540, cid 0, qid 0 00:29:35.219 [2024-12-16 05:58:08.591232] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.219 [2024-12-16 05:58:08.591237] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.219 [2024-12-16 05:58:08.591240] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.219 [2024-12-16 05:58:08.591243] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb65540) on tqpair=0xafb0d0 00:29:35.219 [2024-12-16 05:58:08.591248] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:35.219 [2024-12-16 05:58:08.591255] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:35.219 [2024-12-16 05:58:08.591260] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.219 [2024-12-16 05:58:08.591264] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.219 [2024-12-16 05:58:08.591267] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafb0d0) 00:29:35.220 [2024-12-16 05:58:08.591273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.220 [2024-12-16 05:58:08.591282] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb65540, cid 0, qid 0 00:29:35.220 
[2024-12-16 05:58:08.591348] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.220 [2024-12-16 05:58:08.591354] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.220 [2024-12-16 05:58:08.591357] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.591360] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb65540) on tqpair=0xafb0d0 00:29:35.220 [2024-12-16 05:58:08.591365] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:35.220 [2024-12-16 05:58:08.591372] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.591376] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.591379] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafb0d0) 00:29:35.220 [2024-12-16 05:58:08.591385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.220 [2024-12-16 05:58:08.591396] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb65540, cid 0, qid 0 00:29:35.220 [2024-12-16 05:58:08.591460] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.220 [2024-12-16 05:58:08.591466] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.220 [2024-12-16 05:58:08.591468] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.591472] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb65540) on tqpair=0xafb0d0 00:29:35.220 [2024-12-16 05:58:08.591476] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:35.220 [2024-12-16 05:58:08.591480] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:35.220 [2024-12-16 05:58:08.591487] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:35.220 [2024-12-16 05:58:08.591591] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:35.220 [2024-12-16 05:58:08.591596] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:35.220 [2024-12-16 05:58:08.591604] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.591607] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.591610] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafb0d0) 00:29:35.220 [2024-12-16 05:58:08.591616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.220 [2024-12-16 05:58:08.591625] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb65540, cid 0, qid 0 00:29:35.220 [2024-12-16 05:58:08.591688] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.220 [2024-12-16 05:58:08.591693] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:29:35.220 [2024-12-16 05:58:08.591696] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.591699] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb65540) on tqpair=0xafb0d0 00:29:35.220 [2024-12-16 05:58:08.591703] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:35.220 [2024-12-16 05:58:08.591711] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.591714] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.591718] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafb0d0) 00:29:35.220 [2024-12-16 05:58:08.591723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.220 [2024-12-16 05:58:08.591733] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb65540, cid 0, qid 0 00:29:35.220 [2024-12-16 05:58:08.591806] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.220 [2024-12-16 05:58:08.591811] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.220 [2024-12-16 05:58:08.591814] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.591818] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb65540) on tqpair=0xafb0d0 00:29:35.220 [2024-12-16 05:58:08.591821] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:35.220 [2024-12-16 05:58:08.591825] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:35.220 [2024-12-16 05:58:08.591832] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:35.220 [2024-12-16 05:58:08.591845] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:35.220 [2024-12-16 05:58:08.591858] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.591861] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafb0d0) 00:29:35.220 [2024-12-16 05:58:08.591867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.220 [2024-12-16 05:58:08.591877] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb65540, cid 0, qid 0 00:29:35.220 [2024-12-16 05:58:08.591971] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:35.220 [2024-12-16 05:58:08.591976] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:35.220 [2024-12-16 05:58:08.591980] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.591983] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafb0d0): datao=0, datal=4096, cccid=0 00:29:35.220 [2024-12-16 05:58:08.591987] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb65540) on tqpair(0xafb0d0): expected_datao=0, payload_size=4096 
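The entries just above show the standard enable handshake for the discovery controller: a Fabrics Property Set writes CC.EN = 1, Property Get then polls CSTS until RDY = 1, and only after "controller is ready" does the driver reset the admin queue and issue IDENTIFY (CNS 01h). The following is a minimal C sketch of that handshake only; prop_set()/prop_get() are hypothetical stand-ins (backed here by a fake register pair) for the real Fabrics Property Set/Get commands, and the offsets come from the NVMe register map, not from SPDK code.

/* Sketch of the enable handshake logged above: write CC.EN, poll CSTS.RDY.
 * prop_set()/prop_get() are hypothetical helpers, not the SPDK API. */
#include <stdint.h>
#include <stdio.h>

#define NVME_REG_CC   0x14u   /* Controller Configuration */
#define NVME_REG_CSTS 0x1cu   /* Controller Status        */
#define CC_EN         (1u << 0)
#define CSTS_RDY      (1u << 0)

static uint64_t fake_cc, fake_csts;            /* stand-in for the target */

static void prop_set(uint32_t ofst, uint64_t val)
{
    if (ofst == NVME_REG_CC) {
        fake_cc = val;
        if (val & CC_EN)                       /* fake target becomes ready */
            fake_csts |= CSTS_RDY;
    }
}

static uint64_t prop_get(uint32_t ofst)
{
    return ofst == NVME_REG_CC ? fake_cc : fake_csts;
}

int main(void)
{
    prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | CC_EN);  /* "Setting CC.EN = 1"        */
    while ((prop_get(NVME_REG_CSTS) & CSTS_RDY) == 0)      /* "wait for CSTS.RDY = 1"    */
        ;                                      /* the real driver enforces the 15000 ms timeout */
    printf("controller is ready\n");
    return 0;
}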
00:29:35.220 [2024-12-16 05:58:08.591991] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.592006] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.592011] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.637855] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.220 [2024-12-16 05:58:08.637865] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.220 [2024-12-16 05:58:08.637868] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.637871] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb65540) on tqpair=0xafb0d0 00:29:35.220 [2024-12-16 05:58:08.637879] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:35.220 [2024-12-16 05:58:08.637884] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:35.220 [2024-12-16 05:58:08.637888] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:35.220 [2024-12-16 05:58:08.637892] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:35.220 [2024-12-16 05:58:08.637896] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:29:35.220 [2024-12-16 05:58:08.637900] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:35.220 [2024-12-16 05:58:08.637909] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:35.220 [2024-12-16 05:58:08.637916] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.637920] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.637923] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafb0d0) 00:29:35.220 [2024-12-16 05:58:08.637930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:35.220 [2024-12-16 05:58:08.637942] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb65540, cid 0, qid 0 00:29:35.220 [2024-12-16 05:58:08.638089] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.220 [2024-12-16 05:58:08.638094] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.220 [2024-12-16 05:58:08.638097] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.638100] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb65540) on tqpair=0xafb0d0 00:29:35.220 [2024-12-16 05:58:08.638110] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.638113] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.638117] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafb0d0) 00:29:35.220 [2024-12-16 05:58:08.638122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:35.220 [2024-12-16 05:58:08.638127] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.638131] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.638134] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xafb0d0) 00:29:35.220 [2024-12-16 05:58:08.638138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:35.220 [2024-12-16 05:58:08.638144] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.638147] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.638150] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xafb0d0) 00:29:35.220 [2024-12-16 05:58:08.638155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:35.220 [2024-12-16 05:58:08.638160] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.638163] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.638166] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.220 [2024-12-16 05:58:08.638171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:35.220 [2024-12-16 05:58:08.638175] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:35.220 [2024-12-16 05:58:08.638186] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:35.220 [2024-12-16 05:58:08.638192] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.220 [2024-12-16 05:58:08.638195] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafb0d0) 00:29:35.220 [2024-12-16 05:58:08.638201] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.220 [2024-12-16 05:58:08.638213] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb65540, cid 0, qid 0 00:29:35.220 [2024-12-16 05:58:08.638217] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb656c0, cid 1, qid 0 00:29:35.220 [2024-12-16 05:58:08.638221] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb65840, cid 2, qid 0 00:29:35.221 [2024-12-16 05:58:08.638225] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.221 [2024-12-16 05:58:08.638229] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb65b40, cid 4, qid 0 00:29:35.221 [2024-12-16 05:58:08.638338] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.221 [2024-12-16 05:58:08.638343] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.221 [2024-12-16 05:58:08.638346] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.638350] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb65b40) on 
tqpair=0xafb0d0 00:29:35.221 [2024-12-16 05:58:08.638354] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:35.221 [2024-12-16 05:58:08.638359] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:35.221 [2024-12-16 05:58:08.638368] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.638372] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafb0d0) 00:29:35.221 [2024-12-16 05:58:08.638382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.221 [2024-12-16 05:58:08.638392] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb65b40, cid 4, qid 0 00:29:35.221 [2024-12-16 05:58:08.638461] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:35.221 [2024-12-16 05:58:08.638466] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:35.221 [2024-12-16 05:58:08.638469] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.638473] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafb0d0): datao=0, datal=4096, cccid=4 00:29:35.221 [2024-12-16 05:58:08.638477] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb65b40) on tqpair(0xafb0d0): expected_datao=0, payload_size=4096 00:29:35.221 [2024-12-16 05:58:08.638480] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.638507] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.638511] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.638542] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.221 [2024-12-16 05:58:08.638548] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.221 [2024-12-16 05:58:08.638551] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.638554] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb65b40) on tqpair=0xafb0d0 00:29:35.221 [2024-12-16 05:58:08.638566] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:35.221 [2024-12-16 05:58:08.638589] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.638593] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafb0d0) 00:29:35.221 [2024-12-16 05:58:08.638598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.221 [2024-12-16 05:58:08.638604] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.638607] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.638610] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xafb0d0) 00:29:35.221 [2024-12-16 05:58:08.638616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:35.221 [2024-12-16 05:58:08.638627] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb65b40, cid 4, qid 0 00:29:35.221 [2024-12-16 05:58:08.638632] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb65cc0, cid 5, qid 0 00:29:35.221 [2024-12-16 05:58:08.638733] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:35.221 [2024-12-16 05:58:08.638738] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:35.221 [2024-12-16 05:58:08.638741] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.638745] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafb0d0): datao=0, datal=1024, cccid=4 00:29:35.221 [2024-12-16 05:58:08.638748] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb65b40) on tqpair(0xafb0d0): expected_datao=0, payload_size=1024 00:29:35.221 [2024-12-16 05:58:08.638752] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.638757] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.638761] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.638765] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.221 [2024-12-16 05:58:08.638770] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.221 [2024-12-16 05:58:08.638773] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.638777] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb65cc0) on tqpair=0xafb0d0 00:29:35.221 [2024-12-16 05:58:08.679983] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.221 [2024-12-16 05:58:08.679995] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.221 [2024-12-16 05:58:08.679999] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.680002] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb65b40) on tqpair=0xafb0d0 00:29:35.221 [2024-12-16 05:58:08.680013] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.680016] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafb0d0) 00:29:35.221 [2024-12-16 05:58:08.680023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.221 [2024-12-16 05:58:08.680039] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb65b40, cid 4, qid 0 00:29:35.221 [2024-12-16 05:58:08.680146] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:35.221 [2024-12-16 05:58:08.680151] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:35.221 [2024-12-16 05:58:08.680154] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.680157] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafb0d0): datao=0, datal=3072, cccid=4 00:29:35.221 [2024-12-16 05:58:08.680161] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb65b40) on tqpair(0xafb0d0): expected_datao=0, payload_size=3072 00:29:35.221 [2024-12-16 05:58:08.680165] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.680177] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:35.221 
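The GET LOG PAGE commands in this stretch (cdw10 00ff0070, then 02ff0070, with an 8-byte re-read just below) all target the Discovery log, LID 70h. Bits 31:16 of CDW10 carry NUMDL, the zero-based dword count, which is why the C2H payloads come back as 1024, 3072 and 8 bytes: header first, then header plus the two 1024-byte records, then a short re-read of the generation counter. A small decode sketch follows, assuming only the spec-defined CDW10 layout (NUMDU lives in CDW11, which is 0 here and is ignored).

/* Decode sketch for the GET LOG PAGE CDW10 values seen in this log.
 * Layout per the NVMe spec: bits 07:00 = Log Page Identifier,
 * bits 31:16 = NUMDL (zero-based dword count).  LID 0x70 = Discovery log. */
#include <stdint.h>
#include <stdio.h>

static void decode_get_log_page_cdw10(uint32_t cdw10)
{
    unsigned lid    = cdw10 & 0xffu;
    unsigned numdl  = cdw10 >> 16;             /* zero-based dword count */
    unsigned nbytes = (numdl + 1) * 4;

    printf("cdw10=0x%08x  lid=0x%02x  transfer=%u bytes\n",
           (unsigned)cdw10, lid, nbytes);
}

int main(void)
{
    decode_get_log_page_cdw10(0x00ff0070);     /* 1024 bytes: discovery log header            */
    decode_get_log_page_cdw10(0x02ff0070);     /* 3072 bytes: header + two 1024-byte records  */
    decode_get_log_page_cdw10(0x00010070);     /* 8 bytes: re-read of the generation counter  */
    return 0;
}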
[2024-12-16 05:58:08.680181] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.723860] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.221 [2024-12-16 05:58:08.723874] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.221 [2024-12-16 05:58:08.723878] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.723882] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb65b40) on tqpair=0xafb0d0 00:29:35.221 [2024-12-16 05:58:08.723891] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.723895] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafb0d0) 00:29:35.221 [2024-12-16 05:58:08.723902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.221 [2024-12-16 05:58:08.723918] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb65b40, cid 4, qid 0 00:29:35.221 [2024-12-16 05:58:08.724005] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:35.221 [2024-12-16 05:58:08.724011] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:35.221 [2024-12-16 05:58:08.724014] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.724017] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafb0d0): datao=0, datal=8, cccid=4 00:29:35.221 [2024-12-16 05:58:08.724022] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb65b40) on tqpair(0xafb0d0): expected_datao=0, payload_size=8 00:29:35.221 [2024-12-16 05:58:08.724026] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.724032] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.724035] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.769858] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.221 [2024-12-16 05:58:08.769868] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.221 [2024-12-16 05:58:08.769871] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.221 [2024-12-16 05:58:08.769875] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb65b40) on tqpair=0xafb0d0 00:29:35.221 ===================================================== 00:29:35.221 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:35.221 ===================================================== 00:29:35.221 Controller Capabilities/Features 00:29:35.221 ================================ 00:29:35.221 Vendor ID: 0000 00:29:35.221 Subsystem Vendor ID: 0000 00:29:35.221 Serial Number: .................... 00:29:35.221 Model Number: ........................................ 
00:29:35.221 Firmware Version: 24.09.1 00:29:35.221 Recommended Arb Burst: 0 00:29:35.221 IEEE OUI Identifier: 00 00 00 00:29:35.221 Multi-path I/O 00:29:35.221 May have multiple subsystem ports: No 00:29:35.221 May have multiple controllers: No 00:29:35.221 Associated with SR-IOV VF: No 00:29:35.221 Max Data Transfer Size: 131072 00:29:35.221 Max Number of Namespaces: 0 00:29:35.221 Max Number of I/O Queues: 1024 00:29:35.221 NVMe Specification Version (VS): 1.3 00:29:35.221 NVMe Specification Version (Identify): 1.3 00:29:35.221 Maximum Queue Entries: 128 00:29:35.221 Contiguous Queues Required: Yes 00:29:35.221 Arbitration Mechanisms Supported 00:29:35.221 Weighted Round Robin: Not Supported 00:29:35.221 Vendor Specific: Not Supported 00:29:35.221 Reset Timeout: 15000 ms 00:29:35.221 Doorbell Stride: 4 bytes 00:29:35.221 NVM Subsystem Reset: Not Supported 00:29:35.221 Command Sets Supported 00:29:35.221 NVM Command Set: Supported 00:29:35.221 Boot Partition: Not Supported 00:29:35.221 Memory Page Size Minimum: 4096 bytes 00:29:35.221 Memory Page Size Maximum: 4096 bytes 00:29:35.221 Persistent Memory Region: Not Supported 00:29:35.221 Optional Asynchronous Events Supported 00:29:35.221 Namespace Attribute Notices: Not Supported 00:29:35.221 Firmware Activation Notices: Not Supported 00:29:35.221 ANA Change Notices: Not Supported 00:29:35.221 PLE Aggregate Log Change Notices: Not Supported 00:29:35.221 LBA Status Info Alert Notices: Not Supported 00:29:35.221 EGE Aggregate Log Change Notices: Not Supported 00:29:35.221 Normal NVM Subsystem Shutdown event: Not Supported 00:29:35.221 Zone Descriptor Change Notices: Not Supported 00:29:35.221 Discovery Log Change Notices: Supported 00:29:35.221 Controller Attributes 00:29:35.222 128-bit Host Identifier: Not Supported 00:29:35.222 Non-Operational Permissive Mode: Not Supported 00:29:35.222 NVM Sets: Not Supported 00:29:35.222 Read Recovery Levels: Not Supported 00:29:35.222 Endurance Groups: Not Supported 00:29:35.222 Predictable Latency Mode: Not Supported 00:29:35.222 Traffic Based Keep ALive: Not Supported 00:29:35.222 Namespace Granularity: Not Supported 00:29:35.222 SQ Associations: Not Supported 00:29:35.222 UUID List: Not Supported 00:29:35.222 Multi-Domain Subsystem: Not Supported 00:29:35.222 Fixed Capacity Management: Not Supported 00:29:35.222 Variable Capacity Management: Not Supported 00:29:35.222 Delete Endurance Group: Not Supported 00:29:35.222 Delete NVM Set: Not Supported 00:29:35.222 Extended LBA Formats Supported: Not Supported 00:29:35.222 Flexible Data Placement Supported: Not Supported 00:29:35.222 00:29:35.222 Controller Memory Buffer Support 00:29:35.222 ================================ 00:29:35.222 Supported: No 00:29:35.222 00:29:35.222 Persistent Memory Region Support 00:29:35.222 ================================ 00:29:35.222 Supported: No 00:29:35.222 00:29:35.222 Admin Command Set Attributes 00:29:35.222 ============================ 00:29:35.222 Security Send/Receive: Not Supported 00:29:35.222 Format NVM: Not Supported 00:29:35.222 Firmware Activate/Download: Not Supported 00:29:35.222 Namespace Management: Not Supported 00:29:35.222 Device Self-Test: Not Supported 00:29:35.222 Directives: Not Supported 00:29:35.222 NVMe-MI: Not Supported 00:29:35.222 Virtualization Management: Not Supported 00:29:35.222 Doorbell Buffer Config: Not Supported 00:29:35.222 Get LBA Status Capability: Not Supported 00:29:35.222 Command & Feature Lockdown Capability: Not Supported 00:29:35.222 Abort Command Limit: 1 00:29:35.222 
Async Event Request Limit: 4 00:29:35.222 Number of Firmware Slots: N/A 00:29:35.222 Firmware Slot 1 Read-Only: N/A 00:29:35.222 Firmware Activation Without Reset: N/A 00:29:35.222 Multiple Update Detection Support: N/A 00:29:35.222 Firmware Update Granularity: No Information Provided 00:29:35.222 Per-Namespace SMART Log: No 00:29:35.222 Asymmetric Namespace Access Log Page: Not Supported 00:29:35.222 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:35.222 Command Effects Log Page: Not Supported 00:29:35.222 Get Log Page Extended Data: Supported 00:29:35.222 Telemetry Log Pages: Not Supported 00:29:35.222 Persistent Event Log Pages: Not Supported 00:29:35.222 Supported Log Pages Log Page: May Support 00:29:35.222 Commands Supported & Effects Log Page: Not Supported 00:29:35.222 Feature Identifiers & Effects Log Page:May Support 00:29:35.222 NVMe-MI Commands & Effects Log Page: May Support 00:29:35.222 Data Area 4 for Telemetry Log: Not Supported 00:29:35.222 Error Log Page Entries Supported: 128 00:29:35.222 Keep Alive: Not Supported 00:29:35.222 00:29:35.222 NVM Command Set Attributes 00:29:35.222 ========================== 00:29:35.222 Submission Queue Entry Size 00:29:35.222 Max: 1 00:29:35.222 Min: 1 00:29:35.222 Completion Queue Entry Size 00:29:35.222 Max: 1 00:29:35.222 Min: 1 00:29:35.222 Number of Namespaces: 0 00:29:35.222 Compare Command: Not Supported 00:29:35.222 Write Uncorrectable Command: Not Supported 00:29:35.222 Dataset Management Command: Not Supported 00:29:35.222 Write Zeroes Command: Not Supported 00:29:35.222 Set Features Save Field: Not Supported 00:29:35.222 Reservations: Not Supported 00:29:35.222 Timestamp: Not Supported 00:29:35.222 Copy: Not Supported 00:29:35.222 Volatile Write Cache: Not Present 00:29:35.222 Atomic Write Unit (Normal): 1 00:29:35.222 Atomic Write Unit (PFail): 1 00:29:35.222 Atomic Compare & Write Unit: 1 00:29:35.222 Fused Compare & Write: Supported 00:29:35.222 Scatter-Gather List 00:29:35.222 SGL Command Set: Supported 00:29:35.222 SGL Keyed: Supported 00:29:35.222 SGL Bit Bucket Descriptor: Not Supported 00:29:35.222 SGL Metadata Pointer: Not Supported 00:29:35.222 Oversized SGL: Not Supported 00:29:35.222 SGL Metadata Address: Not Supported 00:29:35.222 SGL Offset: Supported 00:29:35.222 Transport SGL Data Block: Not Supported 00:29:35.222 Replay Protected Memory Block: Not Supported 00:29:35.222 00:29:35.222 Firmware Slot Information 00:29:35.222 ========================= 00:29:35.222 Active slot: 0 00:29:35.222 00:29:35.222 00:29:35.222 Error Log 00:29:35.222 ========= 00:29:35.222 00:29:35.222 Active Namespaces 00:29:35.222 ================= 00:29:35.222 Discovery Log Page 00:29:35.222 ================== 00:29:35.222 Generation Counter: 2 00:29:35.222 Number of Records: 2 00:29:35.222 Record Format: 0 00:29:35.222 00:29:35.222 Discovery Log Entry 0 00:29:35.222 ---------------------- 00:29:35.222 Transport Type: 3 (TCP) 00:29:35.222 Address Family: 1 (IPv4) 00:29:35.222 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:35.222 Entry Flags: 00:29:35.222 Duplicate Returned Information: 1 00:29:35.222 Explicit Persistent Connection Support for Discovery: 1 00:29:35.222 Transport Requirements: 00:29:35.222 Secure Channel: Not Required 00:29:35.222 Port ID: 0 (0x0000) 00:29:35.222 Controller ID: 65535 (0xffff) 00:29:35.222 Admin Max SQ Size: 128 00:29:35.222 Transport Service Identifier: 4420 00:29:35.222 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:35.222 Transport Address: 10.0.0.2 00:29:35.222 
Discovery Log Entry 1 00:29:35.222 ---------------------- 00:29:35.222 Transport Type: 3 (TCP) 00:29:35.222 Address Family: 1 (IPv4) 00:29:35.222 Subsystem Type: 2 (NVM Subsystem) 00:29:35.222 Entry Flags: 00:29:35.222 Duplicate Returned Information: 0 00:29:35.222 Explicit Persistent Connection Support for Discovery: 0 00:29:35.222 Transport Requirements: 00:29:35.222 Secure Channel: Not Required 00:29:35.222 Port ID: 0 (0x0000) 00:29:35.222 Controller ID: 65535 (0xffff) 00:29:35.222 Admin Max SQ Size: 128 00:29:35.222 Transport Service Identifier: 4420 00:29:35.222 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:35.222 Transport Address: 10.0.0.2 [2024-12-16 05:58:08.769953] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:35.222 [2024-12-16 05:58:08.769964] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb65540) on tqpair=0xafb0d0 00:29:35.222 [2024-12-16 05:58:08.769970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.222 [2024-12-16 05:58:08.769975] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb656c0) on tqpair=0xafb0d0 00:29:35.222 [2024-12-16 05:58:08.769979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.222 [2024-12-16 05:58:08.769983] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb65840) on tqpair=0xafb0d0 00:29:35.222 [2024-12-16 05:58:08.769987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.222 [2024-12-16 05:58:08.769991] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.222 [2024-12-16 05:58:08.769995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.222 [2024-12-16 05:58:08.770002] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.222 [2024-12-16 05:58:08.770006] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.222 [2024-12-16 05:58:08.770009] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.222 [2024-12-16 05:58:08.770016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.223 [2024-12-16 05:58:08.770028] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.223 [2024-12-16 05:58:08.770091] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.223 [2024-12-16 05:58:08.770097] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.223 [2024-12-16 05:58:08.770100] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770103] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.223 [2024-12-16 05:58:08.770109] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770112] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770115] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.223 [2024-12-16 05:58:08.770120] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.223 [2024-12-16 05:58:08.770133] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.223 [2024-12-16 05:58:08.770201] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.223 [2024-12-16 05:58:08.770207] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.223 [2024-12-16 05:58:08.770210] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770213] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.223 [2024-12-16 05:58:08.770217] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:35.223 [2024-12-16 05:58:08.770224] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:35.223 [2024-12-16 05:58:08.770232] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770236] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770239] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.223 [2024-12-16 05:58:08.770244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.223 [2024-12-16 05:58:08.770253] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.223 [2024-12-16 05:58:08.770314] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.223 [2024-12-16 05:58:08.770320] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.223 [2024-12-16 05:58:08.770323] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770326] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.223 [2024-12-16 05:58:08.770334] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770338] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770341] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.223 [2024-12-16 05:58:08.770346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.223 [2024-12-16 05:58:08.770355] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.223 [2024-12-16 05:58:08.770418] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.223 [2024-12-16 05:58:08.770424] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.223 [2024-12-16 05:58:08.770427] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770430] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.223 [2024-12-16 05:58:08.770438] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770442] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770445] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.223 [2024-12-16 05:58:08.770451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.223 [2024-12-16 05:58:08.770462] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.223 [2024-12-16 05:58:08.770526] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.223 [2024-12-16 05:58:08.770533] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.223 [2024-12-16 05:58:08.770538] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770544] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.223 [2024-12-16 05:58:08.770555] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770559] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770563] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.223 [2024-12-16 05:58:08.770569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.223 [2024-12-16 05:58:08.770578] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.223 [2024-12-16 05:58:08.770644] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.223 [2024-12-16 05:58:08.770650] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.223 [2024-12-16 05:58:08.770653] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770656] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.223 [2024-12-16 05:58:08.770664] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770667] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770671] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.223 [2024-12-16 05:58:08.770676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.223 [2024-12-16 05:58:08.770685] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.223 [2024-12-16 05:58:08.770757] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.223 [2024-12-16 05:58:08.770767] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.223 [2024-12-16 05:58:08.770770] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770773] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.223 [2024-12-16 05:58:08.770781] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770784] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770787] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.223 [2024-12-16 05:58:08.770793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.223 [2024-12-16 05:58:08.770802] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.223 [2024-12-16 05:58:08.770867] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.223 [2024-12-16 05:58:08.770873] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.223 [2024-12-16 05:58:08.770876] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770880] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.223 [2024-12-16 05:58:08.770887] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770890] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770894] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.223 [2024-12-16 05:58:08.770899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.223 [2024-12-16 05:58:08.770909] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.223 [2024-12-16 05:58:08.770967] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.223 [2024-12-16 05:58:08.770973] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.223 [2024-12-16 05:58:08.770975] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770979] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.223 [2024-12-16 05:58:08.770987] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770990] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.770993] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.223 [2024-12-16 05:58:08.770999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.223 [2024-12-16 05:58:08.771007] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.223 [2024-12-16 05:58:08.771072] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.223 [2024-12-16 05:58:08.771078] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.223 [2024-12-16 05:58:08.771081] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.771084] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.223 [2024-12-16 05:58:08.771093] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.771097] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.771100] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.223 [2024-12-16 05:58:08.771106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.223 [2024-12-16 05:58:08.771115] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.223 [2024-12-16 05:58:08.771178] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.223 [2024-12-16 05:58:08.771186] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.223 [2024-12-16 05:58:08.771191] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.771195] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.223 [2024-12-16 05:58:08.771203] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.771206] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.771209] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.223 [2024-12-16 05:58:08.771214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.223 [2024-12-16 05:58:08.771226] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.223 [2024-12-16 05:58:08.771287] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.223 [2024-12-16 05:58:08.771293] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.223 [2024-12-16 05:58:08.771296] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.771299] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.223 [2024-12-16 05:58:08.771307] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.771310] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.223 [2024-12-16 05:58:08.771314] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.224 [2024-12-16 05:58:08.771322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.224 [2024-12-16 05:58:08.771331] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.224 [2024-12-16 05:58:08.771410] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.224 [2024-12-16 05:58:08.771415] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.224 [2024-12-16 05:58:08.771418] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.771423] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.224 [2024-12-16 05:58:08.771433] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.771436] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.771439] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.224 [2024-12-16 05:58:08.771445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.224 [2024-12-16 05:58:08.771455] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.224 [2024-12-16 05:58:08.771520] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.224 [2024-12-16 05:58:08.771526] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.224 [2024-12-16 05:58:08.771529] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.771532] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.224 [2024-12-16 05:58:08.771540] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.771544] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.771549] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.224 [2024-12-16 05:58:08.771556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.224 [2024-12-16 05:58:08.771566] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.224 [2024-12-16 05:58:08.771634] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.224 [2024-12-16 05:58:08.771641] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.224 [2024-12-16 05:58:08.771645] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.771650] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.224 [2024-12-16 05:58:08.771660] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.771664] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.771667] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.224 [2024-12-16 05:58:08.771673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.224 [2024-12-16 05:58:08.771682] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.224 [2024-12-16 05:58:08.771742] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.224 [2024-12-16 05:58:08.771748] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.224 [2024-12-16 05:58:08.771751] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.771754] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.224 [2024-12-16 05:58:08.771762] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.771765] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.771768] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.224 [2024-12-16 05:58:08.771774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.224 [2024-12-16 05:58:08.771783] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.224 [2024-12-16 05:58:08.771851] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.224 [2024-12-16 05:58:08.771857] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.224 [2024-12-16 05:58:08.771860] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.771864] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.224 
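From the "Prepare to destruct SSD" entry onward this is the shutdown path: the four "ABORTED - SQ DELETION" completions line up with the four ASYNC EVENT REQUESTs issued earlier, RTD3E = 0 falls back to the 10000 ms shutdown timeout, and the long run of FABRIC PROPERTY SET/GET qid:0 cid:3 entries is the driver writing the shutdown notification into CC and then polling CSTS until the shutdown-status field reports complete. A rough sketch of that poll, with a hypothetical prop_get() standing in for Fabrics Property Get and CSTS.SHST taken from the spec register layout (bits 3:2, 2h = shutdown processing complete):

/* Sketch of the shutdown poll behind the repeated "FABRIC PROPERTY GET
 * ... cid:3" entries.  prop_get() is a hypothetical stand-in that here
 * simply reports shutdown complete. */
#include <stdint.h>
#include <stdio.h>

#define NVME_REG_CSTS      0x1cu
#define CSTS_SHST_SHIFT    2
#define CSTS_SHST_MASK     0x3u
#define CSTS_SHST_COMPLETE 0x2u

static uint64_t prop_get(uint32_t ofst)
{
    (void)ofst;                                /* fake target: already shut down */
    return (uint64_t)CSTS_SHST_COMPLETE << CSTS_SHST_SHIFT;
}

int main(void)
{
    uint64_t csts;

    do {                                       /* real code gives up after the 10000 ms shutdown timeout */
        csts = prop_get(NVME_REG_CSTS);
    } while (((csts >> CSTS_SHST_SHIFT) & CSTS_SHST_MASK) != CSTS_SHST_COMPLETE);

    printf("shutdown complete\n");
    return 0;
}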
[2024-12-16 05:58:08.771873] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.771876] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.771879] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.224 [2024-12-16 05:58:08.771885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.224 [2024-12-16 05:58:08.771894] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.224 [2024-12-16 05:58:08.771957] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.224 [2024-12-16 05:58:08.771962] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.224 [2024-12-16 05:58:08.771965] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.771968] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.224 [2024-12-16 05:58:08.771976] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.771980] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.771983] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.224 [2024-12-16 05:58:08.771988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.224 [2024-12-16 05:58:08.771997] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.224 [2024-12-16 05:58:08.772061] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.224 [2024-12-16 05:58:08.772067] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.224 [2024-12-16 05:58:08.772070] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.772073] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.224 [2024-12-16 05:58:08.772081] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.772086] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.772090] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.224 [2024-12-16 05:58:08.772095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.224 [2024-12-16 05:58:08.772104] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.224 [2024-12-16 05:58:08.772179] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.224 [2024-12-16 05:58:08.772185] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.224 [2024-12-16 05:58:08.772188] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.772191] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.224 [2024-12-16 05:58:08.772200] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.772203] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.224 [2024-12-16 
05:58:08.772207] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.224 [2024-12-16 05:58:08.772212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.224 [2024-12-16 05:58:08.772221] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.224 [2024-12-16 05:58:08.772289] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.224 [2024-12-16 05:58:08.772294] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.224 [2024-12-16 05:58:08.772297] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.772300] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.224 [2024-12-16 05:58:08.772309] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.772312] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.772315] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.224 [2024-12-16 05:58:08.772321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.224 [2024-12-16 05:58:08.772330] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.224 [2024-12-16 05:58:08.772388] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.224 [2024-12-16 05:58:08.772393] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.224 [2024-12-16 05:58:08.772396] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.772399] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.224 [2024-12-16 05:58:08.772407] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.772411] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.772414] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.224 [2024-12-16 05:58:08.772419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.224 [2024-12-16 05:58:08.772428] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb659c0, cid 3, qid 0 00:29:35.224 [2024-12-16 05:58:08.772491] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.224 [2024-12-16 05:58:08.772496] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.224 [2024-12-16 05:58:08.772499] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.772502] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb659c0) on tqpair=0xafb0d0 00:29:35.224 [2024-12-16 05:58:08.772510] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.772513] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.224 [2024-12-16 05:58:08.772518] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafb0d0) 00:29:35.224 [2024-12-16 05:58:08.772524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
00:29:35.224 [repeated debug output condensed: from 2024-12-16 05:58:08.772533 through 05:58:08.780070 the same nvme_tcp/nvme_qpair cycle (pdu type = 5, nvme_tcp_capsule_resp_hdr_handle, complete tcp_req(0xb659c0) on tqpair=0xafb0d0, capsule_cmd cid=3, *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) recurs for each poll of the discovery controller while it shuts down]
00:29:35.227 [2024-12-16 05:58:08.780078] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 9 milliseconds 00:29:35.227 
00:29:35.227 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 
00:29:35.227 [2024-12-16 05:58:08.810876] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... [2024-12-16 05:58:08.810909] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3487948 ] 
00:29:35.227 [2024-12-16 05:58:08.838861] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:29:35.227 [2024-12-16 05:58:08.838904] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:35.227 [2024-12-16 05:58:08.838908] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:35.227 [2024-12-16 05:58:08.838918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:35.227 [2024-12-16 05:58:08.838927] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:35.227 [2024-12-16 05:58:08.839319] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:29:35.227 [2024-12-16 05:58:08.839343] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xef20d0 0 
00:29:35.227 [2024-12-16 05:58:08.853859] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:35.227 [2024-12-16 05:58:08.853872] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:35.227 [2024-12-16 05:58:08.853877] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:35.227 [2024-12-16 05:58:08.853880] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:35.227 [2024-12-16 05:58:08.853904] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.227 [2024-12-16 05:58:08.853908] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.227 [2024-12-16 05:58:08.853911] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef20d0) 00:29:35.227 [2024-12-16 05:58:08.853921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:35.227 
[2024-12-16 05:58:08.853938] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c540, cid 0, qid 0 00:29:35.227 [2024-12-16 05:58:08.861859] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.227 [2024-12-16 05:58:08.861868] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.227 [2024-12-16 05:58:08.861871] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.227 [2024-12-16 05:58:08.861875] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c540) on tqpair=0xef20d0 00:29:35.227 [2024-12-16 05:58:08.861883] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:35.227 [2024-12-16 05:58:08.861888] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:29:35.227 [2024-12-16 05:58:08.861893] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:29:35.227 [2024-12-16 05:58:08.861903] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.227 [2024-12-16 05:58:08.861907] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.227 [2024-12-16 05:58:08.861910] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef20d0) 00:29:35.228 [2024-12-16 05:58:08.861917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.228 [2024-12-16 05:58:08.861932] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c540, cid 0, qid 0 00:29:35.228 [2024-12-16 05:58:08.862087] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.228 [2024-12-16 05:58:08.862092] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.228 [2024-12-16 05:58:08.862095] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862099] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c540) on tqpair=0xef20d0 00:29:35.228 [2024-12-16 05:58:08.862103] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:35.228 [2024-12-16 05:58:08.862109] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:35.228 [2024-12-16 05:58:08.862115] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862118] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862121] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef20d0) 00:29:35.228 [2024-12-16 05:58:08.862127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.228 [2024-12-16 05:58:08.862137] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c540, cid 0, qid 0 00:29:35.228 [2024-12-16 05:58:08.862212] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.228 [2024-12-16 05:58:08.862218] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.228 [2024-12-16 05:58:08.862221] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862224] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0xf5c540) on tqpair=0xef20d0 00:29:35.228 [2024-12-16 05:58:08.862228] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:29:35.228 [2024-12-16 05:58:08.862235] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:35.228 [2024-12-16 05:58:08.862240] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862243] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862246] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef20d0) 00:29:35.228 [2024-12-16 05:58:08.862252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.228 [2024-12-16 05:58:08.862261] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c540, cid 0, qid 0 00:29:35.228 [2024-12-16 05:58:08.862325] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.228 [2024-12-16 05:58:08.862330] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.228 [2024-12-16 05:58:08.862333] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862336] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c540) on tqpair=0xef20d0 00:29:35.228 [2024-12-16 05:58:08.862340] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:35.228 [2024-12-16 05:58:08.862348] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862351] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862354] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef20d0) 00:29:35.228 [2024-12-16 05:58:08.862359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.228 [2024-12-16 05:58:08.862368] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c540, cid 0, qid 0 00:29:35.228 [2024-12-16 05:58:08.862434] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.228 [2024-12-16 05:58:08.862439] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.228 [2024-12-16 05:58:08.862442] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862447] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c540) on tqpair=0xef20d0 00:29:35.228 [2024-12-16 05:58:08.862450] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:35.228 [2024-12-16 05:58:08.862454] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:29:35.228 [2024-12-16 05:58:08.862461] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:35.228 [2024-12-16 05:58:08.862565] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:35.228 [2024-12-16 05:58:08.862569] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:35.228 [2024-12-16 05:58:08.862575] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862578] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862581] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef20d0) 00:29:35.228 [2024-12-16 05:58:08.862587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.228 [2024-12-16 05:58:08.862597] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c540, cid 0, qid 0 00:29:35.228 [2024-12-16 05:58:08.862669] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.228 [2024-12-16 05:58:08.862675] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.228 [2024-12-16 05:58:08.862678] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862681] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c540) on tqpair=0xef20d0 00:29:35.228 [2024-12-16 05:58:08.862684] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:35.228 [2024-12-16 05:58:08.862692] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862696] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862699] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef20d0) 00:29:35.228 [2024-12-16 05:58:08.862704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.228 [2024-12-16 05:58:08.862714] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c540, cid 0, qid 0 00:29:35.228 [2024-12-16 05:58:08.862773] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.228 [2024-12-16 05:58:08.862779] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.228 [2024-12-16 05:58:08.862782] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862785] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c540) on tqpair=0xef20d0 00:29:35.228 [2024-12-16 05:58:08.862788] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:35.228 [2024-12-16 05:58:08.862792] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:35.228 [2024-12-16 05:58:08.862799] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:29:35.228 [2024-12-16 05:58:08.862805] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:29:35.228 [2024-12-16 05:58:08.862812] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862816] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef20d0) 00:29:35.228 [2024-12-16 05:58:08.862821] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.228 [2024-12-16 05:58:08.862832] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c540, cid 0, qid 0 00:29:35.228 [2024-12-16 05:58:08.862920] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:35.228 [2024-12-16 05:58:08.862925] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:35.228 [2024-12-16 05:58:08.862928] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862931] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xef20d0): datao=0, datal=4096, cccid=0 00:29:35.228 [2024-12-16 05:58:08.862935] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf5c540) on tqpair(0xef20d0): expected_datao=0, payload_size=4096 00:29:35.228 [2024-12-16 05:58:08.862939] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862951] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862954] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862987] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.228 [2024-12-16 05:58:08.862992] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.228 [2024-12-16 05:58:08.862995] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.862999] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c540) on tqpair=0xef20d0 00:29:35.228 [2024-12-16 05:58:08.863004] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:35.228 [2024-12-16 05:58:08.863008] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:35.228 [2024-12-16 05:58:08.863012] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:35.228 [2024-12-16 05:58:08.863015] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:35.228 [2024-12-16 05:58:08.863019] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:35.228 [2024-12-16 05:58:08.863023] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:35.228 [2024-12-16 05:58:08.863030] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:35.228 [2024-12-16 05:58:08.863036] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.863039] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.863042] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef20d0) 00:29:35.228 [2024-12-16 05:58:08.863048] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:35.228 [2024-12-16 05:58:08.863058] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c540, cid 0, qid 0 00:29:35.228 [2024-12-16 05:58:08.863119] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:29:35.228 [2024-12-16 05:58:08.863124] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.228 [2024-12-16 05:58:08.863127] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.863130] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c540) on tqpair=0xef20d0 00:29:35.228 [2024-12-16 05:58:08.863136] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.228 [2024-12-16 05:58:08.863139] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.863142] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef20d0) 00:29:35.229 [2024-12-16 05:58:08.863147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:35.229 [2024-12-16 05:58:08.863152] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.863157] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.863160] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xef20d0) 00:29:35.229 [2024-12-16 05:58:08.863165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:35.229 [2024-12-16 05:58:08.863170] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.863173] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.863176] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xef20d0) 00:29:35.229 [2024-12-16 05:58:08.863180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:35.229 [2024-12-16 05:58:08.863185] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.863188] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.863191] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.229 [2024-12-16 05:58:08.863196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:35.229 [2024-12-16 05:58:08.863200] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:35.229 [2024-12-16 05:58:08.863209] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:35.229 [2024-12-16 05:58:08.863214] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.863217] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xef20d0) 00:29:35.229 [2024-12-16 05:58:08.863223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.229 [2024-12-16 05:58:08.863234] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c540, cid 0, qid 0 00:29:35.229 [2024-12-16 05:58:08.863239] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c6c0, cid 1, qid 0 00:29:35.229 
[2024-12-16 05:58:08.863243] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c840, cid 2, qid 0 00:29:35.229 [2024-12-16 05:58:08.863246] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.229 [2024-12-16 05:58:08.863250] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5cb40, cid 4, qid 0 00:29:35.229 [2024-12-16 05:58:08.863347] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.229 [2024-12-16 05:58:08.863353] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.229 [2024-12-16 05:58:08.863356] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.863359] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5cb40) on tqpair=0xef20d0 00:29:35.229 [2024-12-16 05:58:08.863363] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:29:35.229 [2024-12-16 05:58:08.863368] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:35.229 [2024-12-16 05:58:08.863375] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:29:35.229 [2024-12-16 05:58:08.863382] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:35.229 [2024-12-16 05:58:08.863387] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.863391] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.863394] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xef20d0) 00:29:35.229 [2024-12-16 05:58:08.863403] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:35.229 [2024-12-16 05:58:08.863412] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5cb40, cid 4, qid 0 00:29:35.229 [2024-12-16 05:58:08.863476] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.229 [2024-12-16 05:58:08.863481] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.229 [2024-12-16 05:58:08.863484] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.863487] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5cb40) on tqpair=0xef20d0 00:29:35.229 [2024-12-16 05:58:08.863537] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:35.229 [2024-12-16 05:58:08.863546] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:35.229 [2024-12-16 05:58:08.863552] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.863555] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xef20d0) 00:29:35.229 [2024-12-16 05:58:08.863561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.229 [2024-12-16 05:58:08.863571] 
nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5cb40, cid 4, qid 0 00:29:35.229 [2024-12-16 05:58:08.863646] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:35.229 [2024-12-16 05:58:08.863651] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:35.229 [2024-12-16 05:58:08.863654] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.863657] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xef20d0): datao=0, datal=4096, cccid=4 00:29:35.229 [2024-12-16 05:58:08.863661] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf5cb40) on tqpair(0xef20d0): expected_datao=0, payload_size=4096 00:29:35.229 [2024-12-16 05:58:08.863664] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.863677] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.863681] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.904976] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.229 [2024-12-16 05:58:08.904988] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.229 [2024-12-16 05:58:08.904991] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.904994] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5cb40) on tqpair=0xef20d0 00:29:35.229 [2024-12-16 05:58:08.905004] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:35.229 [2024-12-16 05:58:08.905014] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:35.229 [2024-12-16 05:58:08.905022] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:35.229 [2024-12-16 05:58:08.905028] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.905031] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xef20d0) 00:29:35.229 [2024-12-16 05:58:08.905038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.229 [2024-12-16 05:58:08.905049] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5cb40, cid 4, qid 0 00:29:35.229 [2024-12-16 05:58:08.905131] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:35.229 [2024-12-16 05:58:08.905137] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:35.229 [2024-12-16 05:58:08.905139] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.905145] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xef20d0): datao=0, datal=4096, cccid=4 00:29:35.229 [2024-12-16 05:58:08.905149] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf5cb40) on tqpair(0xef20d0): expected_datao=0, payload_size=4096 00:29:35.229 [2024-12-16 05:58:08.905152] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.905169] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.905173] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:29:35.229 [2024-12-16 05:58:08.905208] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.229 [2024-12-16 05:58:08.905214] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.229 [2024-12-16 05:58:08.905217] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.905220] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5cb40) on tqpair=0xef20d0 00:29:35.229 [2024-12-16 05:58:08.905230] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:35.229 [2024-12-16 05:58:08.905238] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:35.229 [2024-12-16 05:58:08.905244] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.905247] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xef20d0) 00:29:35.229 [2024-12-16 05:58:08.905253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.229 [2024-12-16 05:58:08.905262] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5cb40, cid 4, qid 0 00:29:35.229 [2024-12-16 05:58:08.905335] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:35.229 [2024-12-16 05:58:08.905341] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:35.229 [2024-12-16 05:58:08.905343] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.905346] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xef20d0): datao=0, datal=4096, cccid=4 00:29:35.229 [2024-12-16 05:58:08.905350] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf5cb40) on tqpair(0xef20d0): expected_datao=0, payload_size=4096 00:29:35.229 [2024-12-16 05:58:08.905354] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.905366] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.905369] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.948857] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.229 [2024-12-16 05:58:08.948867] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.229 [2024-12-16 05:58:08.948870] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.229 [2024-12-16 05:58:08.948874] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5cb40) on tqpair=0xef20d0 00:29:35.229 [2024-12-16 05:58:08.948881] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:35.229 [2024-12-16 05:58:08.948889] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:35.229 [2024-12-16 05:58:08.948897] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:35.229 [2024-12-16 05:58:08.948903] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior 
support feature (timeout 30000 ms) 00:29:35.229 [2024-12-16 05:58:08.948907] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:35.230 [2024-12-16 05:58:08.948912] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:35.230 [2024-12-16 05:58:08.948918] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:35.230 [2024-12-16 05:58:08.948922] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:35.230 [2024-12-16 05:58:08.948927] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:35.230 [2024-12-16 05:58:08.948939] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.948942] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xef20d0) 00:29:35.230 [2024-12-16 05:58:08.948949] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.230 [2024-12-16 05:58:08.948955] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.948958] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.948960] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xef20d0) 00:29:35.230 [2024-12-16 05:58:08.948965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:35.230 [2024-12-16 05:58:08.948977] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5cb40, cid 4, qid 0 00:29:35.230 [2024-12-16 05:58:08.948982] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5ccc0, cid 5, qid 0 00:29:35.230 [2024-12-16 05:58:08.949066] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.230 [2024-12-16 05:58:08.949071] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.230 [2024-12-16 05:58:08.949074] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949077] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5cb40) on tqpair=0xef20d0 00:29:35.230 [2024-12-16 05:58:08.949082] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.230 [2024-12-16 05:58:08.949087] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.230 [2024-12-16 05:58:08.949090] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949093] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5ccc0) on tqpair=0xef20d0 00:29:35.230 [2024-12-16 05:58:08.949102] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949105] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xef20d0) 00:29:35.230 [2024-12-16 05:58:08.949110] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.230 [2024-12-16 05:58:08.949120] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0xf5ccc0, cid 5, qid 0 00:29:35.230 [2024-12-16 05:58:08.949191] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.230 [2024-12-16 05:58:08.949197] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.230 [2024-12-16 05:58:08.949200] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949203] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5ccc0) on tqpair=0xef20d0 00:29:35.230 [2024-12-16 05:58:08.949211] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949215] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xef20d0) 00:29:35.230 [2024-12-16 05:58:08.949220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.230 [2024-12-16 05:58:08.949229] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5ccc0, cid 5, qid 0 00:29:35.230 [2024-12-16 05:58:08.949300] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.230 [2024-12-16 05:58:08.949305] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.230 [2024-12-16 05:58:08.949308] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949314] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5ccc0) on tqpair=0xef20d0 00:29:35.230 [2024-12-16 05:58:08.949322] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949326] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xef20d0) 00:29:35.230 [2024-12-16 05:58:08.949331] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.230 [2024-12-16 05:58:08.949340] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5ccc0, cid 5, qid 0 00:29:35.230 [2024-12-16 05:58:08.949400] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.230 [2024-12-16 05:58:08.949405] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.230 [2024-12-16 05:58:08.949408] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949411] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5ccc0) on tqpair=0xef20d0 00:29:35.230 [2024-12-16 05:58:08.949424] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949428] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xef20d0) 00:29:35.230 [2024-12-16 05:58:08.949433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.230 [2024-12-16 05:58:08.949438] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949441] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xef20d0) 00:29:35.230 [2024-12-16 05:58:08.949446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.230 [2024-12-16 05:58:08.949452] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949455] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xef20d0) 00:29:35.230 [2024-12-16 05:58:08.949460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.230 [2024-12-16 05:58:08.949467] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949471] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xef20d0) 00:29:35.230 [2024-12-16 05:58:08.949476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.230 [2024-12-16 05:58:08.949486] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5ccc0, cid 5, qid 0 00:29:35.230 [2024-12-16 05:58:08.949490] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5cb40, cid 4, qid 0 00:29:35.230 [2024-12-16 05:58:08.949494] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5ce40, cid 6, qid 0 00:29:35.230 [2024-12-16 05:58:08.949498] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5cfc0, cid 7, qid 0 00:29:35.230 [2024-12-16 05:58:08.949633] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:35.230 [2024-12-16 05:58:08.949639] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:35.230 [2024-12-16 05:58:08.949642] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949645] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xef20d0): datao=0, datal=8192, cccid=5 00:29:35.230 [2024-12-16 05:58:08.949649] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf5ccc0) on tqpair(0xef20d0): expected_datao=0, payload_size=8192 00:29:35.230 [2024-12-16 05:58:08.949652] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949666] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949670] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949676] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:35.230 [2024-12-16 05:58:08.949681] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:35.230 [2024-12-16 05:58:08.949684] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949687] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xef20d0): datao=0, datal=512, cccid=4 00:29:35.230 [2024-12-16 05:58:08.949690] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf5cb40) on tqpair(0xef20d0): expected_datao=0, payload_size=512 00:29:35.230 [2024-12-16 05:58:08.949694] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949699] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949702] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949707] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:35.230 [2024-12-16 05:58:08.949711] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:35.230 
[2024-12-16 05:58:08.949714] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949717] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xef20d0): datao=0, datal=512, cccid=6 00:29:35.230 [2024-12-16 05:58:08.949720] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf5ce40) on tqpair(0xef20d0): expected_datao=0, payload_size=512 00:29:35.230 [2024-12-16 05:58:08.949724] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949729] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949732] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949737] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:35.230 [2024-12-16 05:58:08.949741] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:35.230 [2024-12-16 05:58:08.949744] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949747] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xef20d0): datao=0, datal=4096, cccid=7 00:29:35.230 [2024-12-16 05:58:08.949751] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf5cfc0) on tqpair(0xef20d0): expected_datao=0, payload_size=4096 00:29:35.230 [2024-12-16 05:58:08.949754] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949760] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949763] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949770] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.230 [2024-12-16 05:58:08.949774] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.230 [2024-12-16 05:58:08.949777] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949780] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5ccc0) on tqpair=0xef20d0 00:29:35.230 [2024-12-16 05:58:08.949789] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.230 [2024-12-16 05:58:08.949794] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.230 [2024-12-16 05:58:08.949797] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949800] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5cb40) on tqpair=0xef20d0 00:29:35.230 [2024-12-16 05:58:08.949808] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.230 [2024-12-16 05:58:08.949813] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.230 [2024-12-16 05:58:08.949815] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.230 [2024-12-16 05:58:08.949819] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5ce40) on tqpair=0xef20d0 00:29:35.230 [2024-12-16 05:58:08.949824] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.230 [2024-12-16 05:58:08.949829] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.231 [2024-12-16 05:58:08.949832] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.231 [2024-12-16 05:58:08.949838] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5cfc0) on tqpair=0xef20d0 00:29:35.231 
===================================================== 00:29:35.231 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:35.231 ===================================================== 00:29:35.231 Controller Capabilities/Features 00:29:35.231 ================================ 00:29:35.231 Vendor ID: 8086 00:29:35.231 Subsystem Vendor ID: 8086 00:29:35.231 Serial Number: SPDK00000000000001 00:29:35.231 Model Number: SPDK bdev Controller 00:29:35.231 Firmware Version: 24.09.1 00:29:35.231 Recommended Arb Burst: 6 00:29:35.231 IEEE OUI Identifier: e4 d2 5c 00:29:35.231 Multi-path I/O 00:29:35.231 May have multiple subsystem ports: Yes 00:29:35.231 May have multiple controllers: Yes 00:29:35.231 Associated with SR-IOV VF: No 00:29:35.231 Max Data Transfer Size: 131072 00:29:35.231 Max Number of Namespaces: 32 00:29:35.231 Max Number of I/O Queues: 127 00:29:35.231 NVMe Specification Version (VS): 1.3 00:29:35.231 NVMe Specification Version (Identify): 1.3 00:29:35.231 Maximum Queue Entries: 128 00:29:35.231 Contiguous Queues Required: Yes 00:29:35.231 Arbitration Mechanisms Supported 00:29:35.231 Weighted Round Robin: Not Supported 00:29:35.231 Vendor Specific: Not Supported 00:29:35.231 Reset Timeout: 15000 ms 00:29:35.231 Doorbell Stride: 4 bytes 00:29:35.231 NVM Subsystem Reset: Not Supported 00:29:35.231 Command Sets Supported 00:29:35.231 NVM Command Set: Supported 00:29:35.231 Boot Partition: Not Supported 00:29:35.231 Memory Page Size Minimum: 4096 bytes 00:29:35.231 Memory Page Size Maximum: 4096 bytes 00:29:35.231 Persistent Memory Region: Not Supported 00:29:35.231 Optional Asynchronous Events Supported 00:29:35.231 Namespace Attribute Notices: Supported 00:29:35.231 Firmware Activation Notices: Not Supported 00:29:35.231 ANA Change Notices: Not Supported 00:29:35.231 PLE Aggregate Log Change Notices: Not Supported 00:29:35.231 LBA Status Info Alert Notices: Not Supported 00:29:35.231 EGE Aggregate Log Change Notices: Not Supported 00:29:35.231 Normal NVM Subsystem Shutdown event: Not Supported 00:29:35.231 Zone Descriptor Change Notices: Not Supported 00:29:35.231 Discovery Log Change Notices: Not Supported 00:29:35.231 Controller Attributes 00:29:35.231 128-bit Host Identifier: Supported 00:29:35.231 Non-Operational Permissive Mode: Not Supported 00:29:35.231 NVM Sets: Not Supported 00:29:35.231 Read Recovery Levels: Not Supported 00:29:35.231 Endurance Groups: Not Supported 00:29:35.231 Predictable Latency Mode: Not Supported 00:29:35.231 Traffic Based Keep ALive: Not Supported 00:29:35.231 Namespace Granularity: Not Supported 00:29:35.231 SQ Associations: Not Supported 00:29:35.231 UUID List: Not Supported 00:29:35.231 Multi-Domain Subsystem: Not Supported 00:29:35.231 Fixed Capacity Management: Not Supported 00:29:35.231 Variable Capacity Management: Not Supported 00:29:35.231 Delete Endurance Group: Not Supported 00:29:35.231 Delete NVM Set: Not Supported 00:29:35.231 Extended LBA Formats Supported: Not Supported 00:29:35.231 Flexible Data Placement Supported: Not Supported 00:29:35.231 00:29:35.231 Controller Memory Buffer Support 00:29:35.231 ================================ 00:29:35.231 Supported: No 00:29:35.231 00:29:35.231 Persistent Memory Region Support 00:29:35.231 ================================ 00:29:35.231 Supported: No 00:29:35.231 00:29:35.231 Admin Command Set Attributes 00:29:35.231 ============================ 00:29:35.231 Security Send/Receive: Not Supported 00:29:35.231 Format NVM: Not Supported 00:29:35.231 Firmware Activate/Download: 
Not Supported 00:29:35.231 Namespace Management: Not Supported 00:29:35.231 Device Self-Test: Not Supported 00:29:35.231 Directives: Not Supported 00:29:35.231 NVMe-MI: Not Supported 00:29:35.231 Virtualization Management: Not Supported 00:29:35.231 Doorbell Buffer Config: Not Supported 00:29:35.231 Get LBA Status Capability: Not Supported 00:29:35.231 Command & Feature Lockdown Capability: Not Supported 00:29:35.231 Abort Command Limit: 4 00:29:35.231 Async Event Request Limit: 4 00:29:35.231 Number of Firmware Slots: N/A 00:29:35.231 Firmware Slot 1 Read-Only: N/A 00:29:35.231 Firmware Activation Without Reset: N/A 00:29:35.231 Multiple Update Detection Support: N/A 00:29:35.231 Firmware Update Granularity: No Information Provided 00:29:35.231 Per-Namespace SMART Log: No 00:29:35.231 Asymmetric Namespace Access Log Page: Not Supported 00:29:35.231 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:35.231 Command Effects Log Page: Supported 00:29:35.231 Get Log Page Extended Data: Supported 00:29:35.231 Telemetry Log Pages: Not Supported 00:29:35.231 Persistent Event Log Pages: Not Supported 00:29:35.231 Supported Log Pages Log Page: May Support 00:29:35.231 Commands Supported & Effects Log Page: Not Supported 00:29:35.231 Feature Identifiers & Effects Log Page:May Support 00:29:35.231 NVMe-MI Commands & Effects Log Page: May Support 00:29:35.231 Data Area 4 for Telemetry Log: Not Supported 00:29:35.231 Error Log Page Entries Supported: 128 00:29:35.231 Keep Alive: Supported 00:29:35.231 Keep Alive Granularity: 10000 ms 00:29:35.231 00:29:35.231 NVM Command Set Attributes 00:29:35.231 ========================== 00:29:35.231 Submission Queue Entry Size 00:29:35.231 Max: 64 00:29:35.231 Min: 64 00:29:35.231 Completion Queue Entry Size 00:29:35.231 Max: 16 00:29:35.231 Min: 16 00:29:35.231 Number of Namespaces: 32 00:29:35.231 Compare Command: Supported 00:29:35.231 Write Uncorrectable Command: Not Supported 00:29:35.231 Dataset Management Command: Supported 00:29:35.231 Write Zeroes Command: Supported 00:29:35.231 Set Features Save Field: Not Supported 00:29:35.231 Reservations: Supported 00:29:35.231 Timestamp: Not Supported 00:29:35.231 Copy: Supported 00:29:35.231 Volatile Write Cache: Present 00:29:35.231 Atomic Write Unit (Normal): 1 00:29:35.231 Atomic Write Unit (PFail): 1 00:29:35.231 Atomic Compare & Write Unit: 1 00:29:35.231 Fused Compare & Write: Supported 00:29:35.231 Scatter-Gather List 00:29:35.231 SGL Command Set: Supported 00:29:35.231 SGL Keyed: Supported 00:29:35.231 SGL Bit Bucket Descriptor: Not Supported 00:29:35.231 SGL Metadata Pointer: Not Supported 00:29:35.231 Oversized SGL: Not Supported 00:29:35.231 SGL Metadata Address: Not Supported 00:29:35.231 SGL Offset: Supported 00:29:35.231 Transport SGL Data Block: Not Supported 00:29:35.231 Replay Protected Memory Block: Not Supported 00:29:35.231 00:29:35.231 Firmware Slot Information 00:29:35.231 ========================= 00:29:35.231 Active slot: 1 00:29:35.231 Slot 1 Firmware Revision: 24.09.1 00:29:35.231 00:29:35.231 00:29:35.231 Commands Supported and Effects 00:29:35.231 ============================== 00:29:35.231 Admin Commands 00:29:35.231 -------------- 00:29:35.231 Get Log Page (02h): Supported 00:29:35.231 Identify (06h): Supported 00:29:35.231 Abort (08h): Supported 00:29:35.231 Set Features (09h): Supported 00:29:35.231 Get Features (0Ah): Supported 00:29:35.231 Asynchronous Event Request (0Ch): Supported 00:29:35.231 Keep Alive (18h): Supported 00:29:35.231 I/O Commands 00:29:35.231 ------------ 
00:29:35.231 Flush (00h): Supported LBA-Change 00:29:35.231 Write (01h): Supported LBA-Change 00:29:35.231 Read (02h): Supported 00:29:35.231 Compare (05h): Supported 00:29:35.231 Write Zeroes (08h): Supported LBA-Change 00:29:35.231 Dataset Management (09h): Supported LBA-Change 00:29:35.231 Copy (19h): Supported LBA-Change 00:29:35.231 00:29:35.231 Error Log 00:29:35.231 ========= 00:29:35.231 00:29:35.231 Arbitration 00:29:35.231 =========== 00:29:35.231 Arbitration Burst: 1 00:29:35.231 00:29:35.231 Power Management 00:29:35.231 ================ 00:29:35.231 Number of Power States: 1 00:29:35.231 Current Power State: Power State #0 00:29:35.231 Power State #0: 00:29:35.231 Max Power: 0.00 W 00:29:35.231 Non-Operational State: Operational 00:29:35.232 Entry Latency: Not Reported 00:29:35.232 Exit Latency: Not Reported 00:29:35.232 Relative Read Throughput: 0 00:29:35.232 Relative Read Latency: 0 00:29:35.232 Relative Write Throughput: 0 00:29:35.232 Relative Write Latency: 0 00:29:35.232 Idle Power: Not Reported 00:29:35.232 Active Power: Not Reported 00:29:35.232 Non-Operational Permissive Mode: Not Supported 00:29:35.232 00:29:35.232 Health Information 00:29:35.232 ================== 00:29:35.232 Critical Warnings: 00:29:35.232 Available Spare Space: OK 00:29:35.232 Temperature: OK 00:29:35.232 Device Reliability: OK 00:29:35.232 Read Only: No 00:29:35.232 Volatile Memory Backup: OK 00:29:35.232 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:35.232 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:35.232 Available Spare: 0% 00:29:35.232 Available Spare Threshold: 0% 00:29:35.232 Life Percentage U[2024-12-16 05:58:08.949924] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.949928] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xef20d0) 00:29:35.232 [2024-12-16 05:58:08.949933] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.232 [2024-12-16 05:58:08.949944] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5cfc0, cid 7, qid 0 00:29:35.232 [2024-12-16 05:58:08.950024] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.232 [2024-12-16 05:58:08.950029] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.232 [2024-12-16 05:58:08.950032] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950035] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5cfc0) on tqpair=0xef20d0 00:29:35.232 [2024-12-16 05:58:08.950061] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:35.232 [2024-12-16 05:58:08.950070] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c540) on tqpair=0xef20d0 00:29:35.232 [2024-12-16 05:58:08.950076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.232 [2024-12-16 05:58:08.950080] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c6c0) on tqpair=0xef20d0 00:29:35.232 [2024-12-16 05:58:08.950083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.232 [2024-12-16 05:58:08.950088] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c840) on tqpair=0xef20d0 
00:29:35.232 [2024-12-16 05:58:08.950092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.232 [2024-12-16 05:58:08.950095] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.232 [2024-12-16 05:58:08.950099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.232 [2024-12-16 05:58:08.950106] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950109] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950112] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.232 [2024-12-16 05:58:08.950117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.232 [2024-12-16 05:58:08.950128] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.232 [2024-12-16 05:58:08.950197] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.232 [2024-12-16 05:58:08.950203] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.232 [2024-12-16 05:58:08.950205] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950208] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.232 [2024-12-16 05:58:08.950214] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950217] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950220] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.232 [2024-12-16 05:58:08.950225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.232 [2024-12-16 05:58:08.950237] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.232 [2024-12-16 05:58:08.950310] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.232 [2024-12-16 05:58:08.950315] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.232 [2024-12-16 05:58:08.950318] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950323] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.232 [2024-12-16 05:58:08.950326] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:35.232 [2024-12-16 05:58:08.950330] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:35.232 [2024-12-16 05:58:08.950338] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950341] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950344] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.232 [2024-12-16 05:58:08.950349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.232 [2024-12-16 
05:58:08.950358] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.232 [2024-12-16 05:58:08.950420] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.232 [2024-12-16 05:58:08.950425] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.232 [2024-12-16 05:58:08.950428] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950431] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.232 [2024-12-16 05:58:08.950439] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950442] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950445] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.232 [2024-12-16 05:58:08.950451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.232 [2024-12-16 05:58:08.950459] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.232 [2024-12-16 05:58:08.950520] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.232 [2024-12-16 05:58:08.950526] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.232 [2024-12-16 05:58:08.950528] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950532] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.232 [2024-12-16 05:58:08.950539] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950543] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950546] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.232 [2024-12-16 05:58:08.950551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.232 [2024-12-16 05:58:08.950560] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.232 [2024-12-16 05:58:08.950619] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.232 [2024-12-16 05:58:08.950625] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.232 [2024-12-16 05:58:08.950627] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950630] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.232 [2024-12-16 05:58:08.950638] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950641] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950644] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.232 [2024-12-16 05:58:08.950649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.232 [2024-12-16 05:58:08.950659] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.232 [2024-12-16 05:58:08.950720] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:29:35.232 [2024-12-16 05:58:08.950727] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.232 [2024-12-16 05:58:08.950730] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950733] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.232 [2024-12-16 05:58:08.950741] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950744] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950747] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.232 [2024-12-16 05:58:08.950752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.232 [2024-12-16 05:58:08.950761] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.232 [2024-12-16 05:58:08.950833] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.232 [2024-12-16 05:58:08.950839] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.232 [2024-12-16 05:58:08.950841] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950845] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.232 [2024-12-16 05:58:08.950858] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950861] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950864] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.232 [2024-12-16 05:58:08.950869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.232 [2024-12-16 05:58:08.950879] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.232 [2024-12-16 05:58:08.950943] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.232 [2024-12-16 05:58:08.950949] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.232 [2024-12-16 05:58:08.950951] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950955] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.232 [2024-12-16 05:58:08.950962] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950965] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.232 [2024-12-16 05:58:08.950968] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.232 [2024-12-16 05:58:08.950974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.232 [2024-12-16 05:58:08.950982] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.232 [2024-12-16 05:58:08.951056] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.233 [2024-12-16 05:58:08.951062] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.233 [2024-12-16 05:58:08.951064] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:29:35.233 [2024-12-16 05:58:08.951068] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.233 [2024-12-16 05:58:08.951076] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951079] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951082] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.233 [2024-12-16 05:58:08.951087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.233 [2024-12-16 05:58:08.951097] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.233 [2024-12-16 05:58:08.951160] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.233 [2024-12-16 05:58:08.951165] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.233 [2024-12-16 05:58:08.951169] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951173] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.233 [2024-12-16 05:58:08.951181] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951184] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951187] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.233 [2024-12-16 05:58:08.951192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.233 [2024-12-16 05:58:08.951201] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.233 [2024-12-16 05:58:08.951264] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.233 [2024-12-16 05:58:08.951269] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.233 [2024-12-16 05:58:08.951272] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951275] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.233 [2024-12-16 05:58:08.951283] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951286] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951289] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.233 [2024-12-16 05:58:08.951294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.233 [2024-12-16 05:58:08.951304] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.233 [2024-12-16 05:58:08.951363] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.233 [2024-12-16 05:58:08.951368] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.233 [2024-12-16 05:58:08.951371] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951374] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.233 [2024-12-16 05:58:08.951382] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951385] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951388] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.233 [2024-12-16 05:58:08.951393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.233 [2024-12-16 05:58:08.951402] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.233 [2024-12-16 05:58:08.951472] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.233 [2024-12-16 05:58:08.951477] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.233 [2024-12-16 05:58:08.951480] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951483] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.233 [2024-12-16 05:58:08.951490] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951494] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951497] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.233 [2024-12-16 05:58:08.951502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.233 [2024-12-16 05:58:08.951511] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.233 [2024-12-16 05:58:08.951571] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.233 [2024-12-16 05:58:08.951576] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.233 [2024-12-16 05:58:08.951579] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951582] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.233 [2024-12-16 05:58:08.951591] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951595] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951598] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.233 [2024-12-16 05:58:08.951603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.233 [2024-12-16 05:58:08.951612] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.233 [2024-12-16 05:58:08.951684] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.233 [2024-12-16 05:58:08.951689] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.233 [2024-12-16 05:58:08.951692] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951695] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.233 [2024-12-16 05:58:08.951703] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951706] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951709] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.233 [2024-12-16 05:58:08.951714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.233 [2024-12-16 05:58:08.951723] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.233 [2024-12-16 05:58:08.951781] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.233 [2024-12-16 05:58:08.951787] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.233 [2024-12-16 05:58:08.951790] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951793] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.233 [2024-12-16 05:58:08.951800] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951803] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951806] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.233 [2024-12-16 05:58:08.951812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.233 [2024-12-16 05:58:08.951821] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.233 [2024-12-16 05:58:08.951899] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.233 [2024-12-16 05:58:08.951904] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.233 [2024-12-16 05:58:08.951907] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951910] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.233 [2024-12-16 05:58:08.951919] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951922] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.951925] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.233 [2024-12-16 05:58:08.951930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.233 [2024-12-16 05:58:08.951939] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.233 [2024-12-16 05:58:08.952000] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.233 [2024-12-16 05:58:08.952006] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.233 [2024-12-16 05:58:08.952008] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.952011] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.233 [2024-12-16 05:58:08.952021] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.952024] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.952027] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.233 [2024-12-16 05:58:08.952032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.233 [2024-12-16 05:58:08.952041] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.233 [2024-12-16 05:58:08.952112] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.233 [2024-12-16 05:58:08.952117] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.233 [2024-12-16 05:58:08.952120] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.952123] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.233 [2024-12-16 05:58:08.952132] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.952135] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.952138] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.233 [2024-12-16 05:58:08.952143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.233 [2024-12-16 05:58:08.952153] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.233 [2024-12-16 05:58:08.952220] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.233 [2024-12-16 05:58:08.952225] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.233 [2024-12-16 05:58:08.952228] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.952231] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.233 [2024-12-16 05:58:08.952239] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.952242] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.233 [2024-12-16 05:58:08.952245] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.233 [2024-12-16 05:58:08.952250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.233 [2024-12-16 05:58:08.952259] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.233 [2024-12-16 05:58:08.952321] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.233 [2024-12-16 05:58:08.952326] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.233 [2024-12-16 05:58:08.952329] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.234 [2024-12-16 05:58:08.952332] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.234 [2024-12-16 05:58:08.952340] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.234 [2024-12-16 05:58:08.952343] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.234 [2024-12-16 05:58:08.952346] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.234 [2024-12-16 05:58:08.952351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.234 [2024-12-16 05:58:08.952361] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.234 [2024-12-16 05:58:08.952435] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.234 [2024-12-16 05:58:08.952441] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.234 [2024-12-16 05:58:08.952443] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.234 [2024-12-16 05:58:08.952447] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.234 [2024-12-16 05:58:08.952455] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.234 [2024-12-16 05:58:08.952459] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.234 [2024-12-16 05:58:08.952465] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.234 [2024-12-16 05:58:08.952470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.234 [2024-12-16 05:58:08.952480] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.234 [2024-12-16 05:58:08.952538] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.234 [2024-12-16 05:58:08.952543] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.234 [2024-12-16 05:58:08.952546] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.234 [2024-12-16 05:58:08.952549] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.234 [2024-12-16 05:58:08.952557] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.234 [2024-12-16 05:58:08.952560] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.234 [2024-12-16 05:58:08.952563] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.234 [2024-12-16 05:58:08.952569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.234 [2024-12-16 05:58:08.952577] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.234 [2024-12-16 05:58:08.952638] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.234 [2024-12-16 05:58:08.952643] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.234 [2024-12-16 05:58:08.952646] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.234 [2024-12-16 05:58:08.952649] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.234 [2024-12-16 05:58:08.952657] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.234 [2024-12-16 05:58:08.952660] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.234 [2024-12-16 05:58:08.952663] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.234 [2024-12-16 05:58:08.952669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.234 [2024-12-16 05:58:08.952677] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.234 [2024-12-16 05:58:08.952750] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.234 [2024-12-16 05:58:08.952755] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.234 [2024-12-16 05:58:08.952757] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.234 [2024-12-16 05:58:08.952761] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.234 [2024-12-16 05:58:08.952769] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.234 [2024-12-16 05:58:08.952772] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.234 [2024-12-16 05:58:08.952775] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.234 [2024-12-16 05:58:08.952781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.234 [2024-12-16 05:58:08.952790] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.234 [2024-12-16 05:58:08.956856] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.234 [2024-12-16 05:58:08.956864] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.234 [2024-12-16 05:58:08.956867] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.234 [2024-12-16 05:58:08.956870] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.234 [2024-12-16 05:58:08.956879] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:35.234 [2024-12-16 05:58:08.956882] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:35.234 [2024-12-16 05:58:08.956885] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef20d0) 00:29:35.234 [2024-12-16 05:58:08.956893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.234 [2024-12-16 05:58:08.956904] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf5c9c0, cid 3, qid 0 00:29:35.234 [2024-12-16 05:58:08.956969] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:35.234 [2024-12-16 05:58:08.956974] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:35.234 [2024-12-16 05:58:08.956977] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:35.234 [2024-12-16 05:58:08.956980] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf5c9c0) on tqpair=0xef20d0 00:29:35.234 [2024-12-16 05:58:08.956986] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:29:35.234 sed: 0% 00:29:35.234 Data Units Read: 0 00:29:35.234 Data Units Written: 0 00:29:35.234 Host Read Commands: 0 00:29:35.234 Host Write Commands: 0 00:29:35.234 Controller Busy Time: 0 minutes 00:29:35.234 Power Cycles: 0 00:29:35.234 Power On Hours: 0 hours 00:29:35.234 Unsafe Shutdowns: 0 00:29:35.234 Unrecoverable Media Errors: 0 00:29:35.234 Lifetime Error Log Entries: 0 00:29:35.234 Warning Temperature Time: 0 minutes 00:29:35.234 Critical Temperature Time: 0 minutes 00:29:35.234 00:29:35.234 Number of Queues 00:29:35.234 ================ 00:29:35.234 Number of I/O Submission Queues: 127 00:29:35.234 Number of I/O Completion Queues: 127 00:29:35.234 00:29:35.234 Active Namespaces 00:29:35.234 ================= 00:29:35.234 Namespace ID:1 00:29:35.234 Error Recovery Timeout: Unlimited 00:29:35.234 Command Set Identifier: NVM (00h) 00:29:35.234 Deallocate: Supported 00:29:35.234 Deallocated/Unwritten Error: Not Supported 00:29:35.234 
Deallocated Read Value: Unknown 00:29:35.234 Deallocate in Write Zeroes: Not Supported 00:29:35.234 Deallocated Guard Field: 0xFFFF 00:29:35.234 Flush: Supported 00:29:35.234 Reservation: Supported 00:29:35.234 Namespace Sharing Capabilities: Multiple Controllers 00:29:35.234 Size (in LBAs): 131072 (0GiB) 00:29:35.234 Capacity (in LBAs): 131072 (0GiB) 00:29:35.234 Utilization (in LBAs): 131072 (0GiB) 00:29:35.234 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:35.234 EUI64: ABCDEF0123456789 00:29:35.234 UUID: 0a20d417-b447-42fc-a15f-14edbd35cf7c 00:29:35.234 Thin Provisioning: Not Supported 00:29:35.234 Per-NS Atomic Units: Yes 00:29:35.234 Atomic Boundary Size (Normal): 0 00:29:35.234 Atomic Boundary Size (PFail): 0 00:29:35.234 Atomic Boundary Offset: 0 00:29:35.234 Maximum Single Source Range Length: 65535 00:29:35.234 Maximum Copy Length: 65535 00:29:35.234 Maximum Source Range Count: 1 00:29:35.234 NGUID/EUI64 Never Reused: No 00:29:35.234 Namespace Write Protected: No 00:29:35.234 Number of LBA Formats: 1 00:29:35.234 Current LBA Format: LBA Format #00 00:29:35.234 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:35.234 00:29:35.234 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:35.234 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:35.234 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.234 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:35.234 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.234 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:35.234 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:35.234 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:35.234 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:35.234 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:35.234 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:35.234 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:35.234 05:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:35.234 rmmod nvme_tcp 00:29:35.234 rmmod nvme_fabrics 00:29:35.234 rmmod nvme_keyring 00:29:35.234 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:35.234 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:35.234 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:35.234 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 3487908 ']' 00:29:35.234 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 3487908 00:29:35.234 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 3487908 ']' 00:29:35.234 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 3487908 00:29:35.234 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:29:35.234 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:35.234 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3487908 00:29:35.493 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:35.493 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:35.493 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3487908' 00:29:35.493 killing process with pid 3487908 00:29:35.493 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 3487908 00:29:35.493 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 3487908 00:29:35.493 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:35.493 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:35.493 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:35.493 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:35.493 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:29:35.493 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:29:35.493 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:35.494 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:35.494 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:35.494 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.494 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:35.494 05:58:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.024 05:58:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:38.024 00:29:38.024 real 0m9.324s 00:29:38.024 user 0m5.669s 00:29:38.024 sys 0m4.805s 00:29:38.024 05:58:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:38.024 05:58:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:38.024 ************************************ 00:29:38.024 END TEST nvmf_identify 00:29:38.024 ************************************ 00:29:38.024 05:58:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.025 ************************************ 00:29:38.025 START TEST nvmf_perf 00:29:38.025 ************************************ 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:38.025 * Looking for test storage... 
00:29:38.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:38.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.025 --rc genhtml_branch_coverage=1 00:29:38.025 --rc genhtml_function_coverage=1 00:29:38.025 --rc genhtml_legend=1 00:29:38.025 --rc geninfo_all_blocks=1 00:29:38.025 --rc geninfo_unexecuted_blocks=1 00:29:38.025 00:29:38.025 ' 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:38.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.025 --rc genhtml_branch_coverage=1 00:29:38.025 --rc genhtml_function_coverage=1 00:29:38.025 --rc genhtml_legend=1 00:29:38.025 --rc geninfo_all_blocks=1 00:29:38.025 --rc geninfo_unexecuted_blocks=1 00:29:38.025 00:29:38.025 ' 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:38.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.025 --rc genhtml_branch_coverage=1 00:29:38.025 --rc genhtml_function_coverage=1 00:29:38.025 --rc genhtml_legend=1 00:29:38.025 --rc geninfo_all_blocks=1 00:29:38.025 --rc geninfo_unexecuted_blocks=1 00:29:38.025 00:29:38.025 ' 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:38.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.025 --rc genhtml_branch_coverage=1 00:29:38.025 --rc genhtml_function_coverage=1 00:29:38.025 --rc genhtml_legend=1 00:29:38.025 --rc geninfo_all_blocks=1 00:29:38.025 --rc geninfo_unexecuted_blocks=1 00:29:38.025 00:29:38.025 ' 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:38.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:38.025 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:38.026 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:38.026 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:38.026 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:38.026 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.026 05:58:11 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.026 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.026 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:38.026 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:38.026 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:38.026 05:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 
00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:43.290 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:43.290 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:43.290 Found net devices under 0000:af:00.0: cvl_0_0 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.290 05:58:16 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:43.290 Found net devices under 0000:af:00.1: cvl_0_1 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # is_hw=yes 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:43.290 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:43.291 05:58:16 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:43.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:43.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:29:43.291 00:29:43.291 --- 10.0.0.2 ping statistics --- 00:29:43.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.291 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:43.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:43.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:29:43.291 00:29:43.291 --- 10.0.0.1 ping statistics --- 00:29:43.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.291 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # return 0 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=3491400 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 3491400 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 3491400 ']' 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:43.291 05:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:43.291 [2024-12-16 05:58:16.873980] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:43.291 [2024-12-16 05:58:16.874022] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:43.291 [2024-12-16 05:58:16.931540] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:43.291 [2024-12-16 05:58:16.971965] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:43.291 [2024-12-16 05:58:16.972003] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:43.291 [2024-12-16 05:58:16.972010] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:43.291 [2024-12-16 05:58:16.972016] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:43.291 [2024-12-16 05:58:16.972022] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:43.291 [2024-12-16 05:58:16.972066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.291 [2024-12-16 05:58:16.972143] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:43.291 [2024-12-16 05:58:16.972233] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:43.291 [2024-12-16 05:58:16.972234] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.291 05:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:43.291 05:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:29:43.291 05:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:43.291 05:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:43.291 05:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:43.291 05:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:43.291 05:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:43.291 05:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:46.564 05:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:46.564 05:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:46.564 05:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:29:46.564 05:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:46.821 05:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:46.821 05:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:29:46.821 05:58:20 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:46.821 05:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:46.821 05:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:47.077 [2024-12-16 05:58:20.749241] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.077 05:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:47.334 05:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:47.334 05:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:47.334 05:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:47.334 05:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:47.591 05:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:47.847 [2024-12-16 05:58:21.557506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:47.847 05:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:48.103 05:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:29:48.103 05:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:29:48.103 05:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:48.103 05:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:29:49.469 Initializing NVMe Controllers 00:29:49.469 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:29:49.469 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:29:49.469 Initialization complete. Launching workers. 
00:29:49.469 ======================================================== 00:29:49.469 Latency(us) 00:29:49.469 Device Information : IOPS MiB/s Average min max 00:29:49.469 PCIE (0000:5e:00.0) NSID 1 from core 0: 99908.64 390.27 319.93 33.96 4254.63 00:29:49.469 ======================================================== 00:29:49.469 Total : 99908.64 390.27 319.93 33.96 4254.63 00:29:49.469 00:29:49.469 05:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:50.838 Initializing NVMe Controllers 00:29:50.838 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:50.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:50.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:50.838 Initialization complete. Launching workers. 00:29:50.838 ======================================================== 00:29:50.838 Latency(us) 00:29:50.838 Device Information : IOPS MiB/s Average min max 00:29:50.838 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 290.00 1.13 3554.65 121.72 45641.16 00:29:50.838 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19701.52 7229.72 47888.51 00:29:50.838 ======================================================== 00:29:50.839 Total : 341.00 1.33 5969.58 121.72 47888.51 00:29:50.839 00:29:50.839 05:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:51.768 Initializing NVMe Controllers 00:29:51.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:51.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:51.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:51.768 Initialization complete. Launching workers. 00:29:51.768 ======================================================== 00:29:51.768 Latency(us) 00:29:51.768 Device Information : IOPS MiB/s Average min max 00:29:51.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11196.99 43.74 2857.83 320.19 6796.77 00:29:51.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3884.00 15.17 8280.94 6414.13 15815.50 00:29:51.768 ======================================================== 00:29:51.768 Total : 15080.98 58.91 4254.51 320.19 15815.50 00:29:51.768 00:29:51.768 05:58:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:51.768 05:58:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:51.768 05:58:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:54.291 Initializing NVMe Controllers 00:29:54.291 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:54.291 Controller IO queue size 128, less than required. 00:29:54.291 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:54.291 Controller IO queue size 128, less than required. 00:29:54.291 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:54.291 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:54.291 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:54.291 Initialization complete. Launching workers. 00:29:54.291 ======================================================== 00:29:54.291 Latency(us) 00:29:54.291 Device Information : IOPS MiB/s Average min max 00:29:54.291 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1836.48 459.12 70824.31 46711.65 131044.99 00:29:54.291 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 563.99 141.00 229440.91 66072.57 352070.17 00:29:54.291 ======================================================== 00:29:54.291 Total : 2400.47 600.12 108091.44 46711.65 352070.17 00:29:54.291 00:29:54.547 05:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:54.547 No valid NVMe controllers or AIO or URING devices found 00:29:54.547 Initializing NVMe Controllers 00:29:54.547 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:54.547 Controller IO queue size 128, less than required. 00:29:54.547 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:54.547 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:54.547 Controller IO queue size 128, less than required. 00:29:54.547 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:54.547 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:29:54.547 WARNING: Some requested NVMe devices were skipped 00:29:54.547 05:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:57.822 Initializing NVMe Controllers 00:29:57.822 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:57.822 Controller IO queue size 128, less than required. 00:29:57.822 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.822 Controller IO queue size 128, less than required. 00:29:57.822 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.822 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:57.822 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:57.822 Initialization complete. Launching workers. 
00:29:57.822 00:29:57.822 ==================== 00:29:57.822 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:57.822 TCP transport: 00:29:57.822 polls: 12683 00:29:57.822 idle_polls: 9021 00:29:57.822 sock_completions: 3662 00:29:57.822 nvme_completions: 6245 00:29:57.822 submitted_requests: 9382 00:29:57.822 queued_requests: 1 00:29:57.822 00:29:57.822 ==================== 00:29:57.822 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:57.822 TCP transport: 00:29:57.822 polls: 16674 00:29:57.822 idle_polls: 12233 00:29:57.822 sock_completions: 4441 00:29:57.822 nvme_completions: 6769 00:29:57.822 submitted_requests: 10252 00:29:57.822 queued_requests: 1 00:29:57.822 ======================================================== 00:29:57.822 Latency(us) 00:29:57.822 Device Information : IOPS MiB/s Average min max 00:29:57.822 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1559.96 389.99 84567.40 41803.72 150444.00 00:29:57.822 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1690.88 422.72 76343.98 41760.70 118214.03 00:29:57.822 ======================================================== 00:29:57.822 Total : 3250.84 812.71 80290.11 41760.70 150444.00 00:29:57.822 00:29:57.822 05:58:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:57.822 05:58:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:57.822 05:58:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:57.822 05:58:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:29:57.822 05:58:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:01.233 05:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=086474ad-b336-4ba7-9a4b-d977a6d57479 00:30:01.233 05:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 086474ad-b336-4ba7-9a4b-d977a6d57479 00:30:01.233 05:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=086474ad-b336-4ba7-9a4b-d977a6d57479 00:30:01.233 05:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:01.233 05:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:01.233 05:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:01.233 05:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:01.233 05:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:01.233 { 00:30:01.233 "uuid": "086474ad-b336-4ba7-9a4b-d977a6d57479", 00:30:01.233 "name": "lvs_0", 00:30:01.233 "base_bdev": "Nvme0n1", 00:30:01.233 "total_data_clusters": 238234, 00:30:01.233 "free_clusters": 238234, 00:30:01.233 "block_size": 512, 00:30:01.233 "cluster_size": 4194304 00:30:01.233 } 00:30:01.233 ]' 00:30:01.233 05:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="086474ad-b336-4ba7-9a4b-d977a6d57479") .free_clusters' 00:30:01.233 05:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:30:01.233 05:58:34 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="086474ad-b336-4ba7-9a4b-d977a6d57479") .cluster_size' 00:30:01.233 05:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:01.233 05:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:30:01.233 05:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:30:01.233 952936 00:30:01.233 05:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:01.233 05:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:01.233 05:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 086474ad-b336-4ba7-9a4b-d977a6d57479 lbd_0 20480 00:30:01.498 05:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=903c965a-51b3-46cc-bb27-26a15a1ed64d 00:30:01.498 05:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 903c965a-51b3-46cc-bb27-26a15a1ed64d lvs_n_0 00:30:02.067 05:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=d0f3423d-fb5e-4c5a-b9da-e7da5779b737 00:30:02.067 05:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb d0f3423d-fb5e-4c5a-b9da-e7da5779b737 00:30:02.067 05:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=d0f3423d-fb5e-4c5a-b9da-e7da5779b737 00:30:02.067 05:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:02.067 05:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:02.067 05:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:02.067 05:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:02.323 05:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:02.323 { 00:30:02.323 "uuid": "086474ad-b336-4ba7-9a4b-d977a6d57479", 00:30:02.323 "name": "lvs_0", 00:30:02.323 "base_bdev": "Nvme0n1", 00:30:02.323 "total_data_clusters": 238234, 00:30:02.323 "free_clusters": 233114, 00:30:02.323 "block_size": 512, 00:30:02.323 "cluster_size": 4194304 00:30:02.323 }, 00:30:02.323 { 00:30:02.323 "uuid": "d0f3423d-fb5e-4c5a-b9da-e7da5779b737", 00:30:02.323 "name": "lvs_n_0", 00:30:02.323 "base_bdev": "903c965a-51b3-46cc-bb27-26a15a1ed64d", 00:30:02.323 "total_data_clusters": 5114, 00:30:02.323 "free_clusters": 5114, 00:30:02.323 "block_size": 512, 00:30:02.323 "cluster_size": 4194304 00:30:02.323 } 00:30:02.323 ]' 00:30:02.323 05:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="d0f3423d-fb5e-4c5a-b9da-e7da5779b737") .free_clusters' 00:30:02.323 05:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:30:02.323 05:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="d0f3423d-fb5e-4c5a-b9da-e7da5779b737") .cluster_size' 00:30:02.323 05:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:02.323 05:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:30:02.323 05:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1374 -- # echo 20456 00:30:02.323 20456 00:30:02.323 05:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:02.580 05:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d0f3423d-fb5e-4c5a-b9da-e7da5779b737 lbd_nest_0 20456 00:30:02.580 05:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=27332526-273e-4839-bcde-58f42ab13ca4 00:30:02.580 05:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:02.837 05:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:02.837 05:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 27332526-273e-4839-bcde-58f42ab13ca4 00:30:03.094 05:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:03.350 05:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:03.351 05:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:03.351 05:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:03.351 05:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:03.351 05:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:15.530 Initializing NVMe Controllers 00:30:15.530 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:15.530 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:15.530 Initialization complete. Launching workers. 00:30:15.530 ======================================================== 00:30:15.530 Latency(us) 00:30:15.530 Device Information : IOPS MiB/s Average min max 00:30:15.530 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.78 0.02 20140.47 129.49 45811.89 00:30:15.530 ======================================================== 00:30:15.530 Total : 49.78 0.02 20140.47 129.49 45811.89 00:30:15.530 00:30:15.530 05:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:15.530 05:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:25.492 Initializing NVMe Controllers 00:30:25.492 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:25.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:25.492 Initialization complete. Launching workers. 
00:30:25.492 ======================================================== 00:30:25.492 Latency(us) 00:30:25.492 Device Information : IOPS MiB/s Average min max 00:30:25.492 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 62.70 7.84 15960.19 7042.24 48863.68 00:30:25.492 ======================================================== 00:30:25.492 Total : 62.70 7.84 15960.19 7042.24 48863.68 00:30:25.492 00:30:25.492 05:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:25.492 05:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:25.492 05:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:35.455 Initializing NVMe Controllers 00:30:35.455 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:35.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:35.455 Initialization complete. Launching workers. 00:30:35.455 ======================================================== 00:30:35.455 Latency(us) 00:30:35.455 Device Information : IOPS MiB/s Average min max 00:30:35.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8579.50 4.19 3730.87 243.67 10153.99 00:30:35.455 ======================================================== 00:30:35.455 Total : 8579.50 4.19 3730.87 243.67 10153.99 00:30:35.455 00:30:35.455 05:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:35.455 05:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:45.424 Initializing NVMe Controllers 00:30:45.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:45.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:45.424 Initialization complete. Launching workers. 00:30:45.424 ======================================================== 00:30:45.424 Latency(us) 00:30:45.424 Device Information : IOPS MiB/s Average min max 00:30:45.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4239.00 529.88 7555.56 758.13 18875.24 00:30:45.424 ======================================================== 00:30:45.424 Total : 4239.00 529.88 7555.56 758.13 18875.24 00:30:45.424 00:30:45.424 05:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:45.424 05:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:45.424 05:59:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:55.385 Initializing NVMe Controllers 00:30:55.385 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:55.385 Controller IO queue size 128, less than required. 00:30:55.385 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:55.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:55.385 Initialization complete. Launching workers. 00:30:55.385 ======================================================== 00:30:55.385 Latency(us) 00:30:55.385 Device Information : IOPS MiB/s Average min max 00:30:55.385 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15864.86 7.75 8073.00 1376.89 21957.12 00:30:55.385 ======================================================== 00:30:55.385 Total : 15864.86 7.75 8073.00 1376.89 21957.12 00:30:55.385 00:30:55.385 05:59:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:55.385 05:59:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:05.341 Initializing NVMe Controllers 00:31:05.341 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:05.341 Controller IO queue size 128, less than required. 00:31:05.341 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:05.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:05.341 Initialization complete. Launching workers. 00:31:05.341 ======================================================== 00:31:05.341 Latency(us) 00:31:05.341 Device Information : IOPS MiB/s Average min max 00:31:05.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1206.17 150.77 106539.40 16082.73 217862.68 00:31:05.341 ======================================================== 00:31:05.341 Total : 1206.17 150.77 106539.40 16082.73 217862.68 00:31:05.341 00:31:05.341 05:59:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:05.341 05:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 27332526-273e-4839-bcde-58f42ab13ca4 00:31:06.274 05:59:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:06.274 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 903c965a-51b3-46cc-bb27-26a15a1ed64d 00:31:06.533 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:06.790 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:06.790 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:06.790 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:06.790 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:06.790 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:06.790 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:06.790 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:06.790 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:06.790 rmmod nvme_tcp 
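The runs above are the whole performance sweep: host/perf.sh walks every combination of queue depth (1, 32, 128) and IO size (512 B, 128 KiB) against the subsystem it just exported, running spdk_nvme_perf for 10 seconds of 50/50 random read/write each time. A minimal standalone sketch of that loop, assuming the target is already listening on 10.0.0.2:4420 and SPDK is built under the workspace path used in this run; the flag values are taken from the log, the variable names are only illustrative:

  # Sweep queue depth and IO size against the NVMe/TCP subsystem exported above.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  qd_depth=("1" "32" "128")
  io_size=("512" "131072")
  for qd in "${qd_depth[@]}"; do
    for o in "${io_size[@]}"; do
      # 10 s of 50/50 random read/write at the given queue depth and block size
      "$SPDK_DIR/build/bin/spdk_nvme_perf" -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    done
  done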
00:31:06.790 rmmod nvme_fabrics 00:31:06.790 rmmod nvme_keyring 00:31:06.790 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:06.790 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:06.791 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:06.791 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 3491400 ']' 00:31:06.791 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 3491400 00:31:06.791 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 3491400 ']' 00:31:06.791 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 3491400 00:31:06.791 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:31:06.791 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:06.791 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3491400 00:31:06.791 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:06.791 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:06.791 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3491400' 00:31:06.791 killing process with pid 3491400 00:31:06.791 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 3491400 00:31:06.791 05:59:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 3491400 00:31:08.699 05:59:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:08.699 05:59:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:08.699 05:59:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:08.699 05:59:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:08.699 05:59:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:08.699 05:59:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:31:08.699 05:59:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:31:08.699 05:59:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:08.699 05:59:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:08.699 05:59:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.699 05:59:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.699 05:59:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.604 05:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:10.604 00:31:10.604 real 1m32.735s 00:31:10.604 user 5m32.588s 00:31:10.604 sys 0m17.051s 00:31:10.604 05:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:10.604 05:59:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:10.604 ************************************ 00:31:10.604 END TEST nvmf_perf 00:31:10.604 ************************************ 00:31:10.604 05:59:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:10.604 05:59:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:10.604 05:59:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:10.604 05:59:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.604 ************************************ 00:31:10.604 START TEST nvmf_fio_host 00:31:10.604 ************************************ 00:31:10.604 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:10.604 * Looking for test storage... 00:31:10.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:10.604 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:10.604 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:31:10.604 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:10.604 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:10.604 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:10.604 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:10.604 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:10.604 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:10.604 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:10.604 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:10.604 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:10.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.605 --rc genhtml_branch_coverage=1 00:31:10.605 --rc genhtml_function_coverage=1 00:31:10.605 --rc genhtml_legend=1 00:31:10.605 --rc geninfo_all_blocks=1 00:31:10.605 --rc geninfo_unexecuted_blocks=1 00:31:10.605 00:31:10.605 ' 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:10.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.605 --rc genhtml_branch_coverage=1 00:31:10.605 --rc genhtml_function_coverage=1 00:31:10.605 --rc genhtml_legend=1 00:31:10.605 --rc geninfo_all_blocks=1 00:31:10.605 --rc geninfo_unexecuted_blocks=1 00:31:10.605 00:31:10.605 ' 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:10.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.605 --rc genhtml_branch_coverage=1 00:31:10.605 --rc genhtml_function_coverage=1 00:31:10.605 --rc genhtml_legend=1 00:31:10.605 --rc geninfo_all_blocks=1 00:31:10.605 --rc geninfo_unexecuted_blocks=1 00:31:10.605 00:31:10.605 ' 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:10.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.605 --rc genhtml_branch_coverage=1 00:31:10.605 --rc genhtml_function_coverage=1 00:31:10.605 --rc genhtml_legend=1 00:31:10.605 --rc geninfo_all_blocks=1 00:31:10.605 --rc geninfo_unexecuted_blocks=1 00:31:10.605 00:31:10.605 ' 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:10.605 05:59:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.605 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:10.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:10.606 
05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:10.606 05:59:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.870 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:15.870 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:15.870 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:15.870 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:15.870 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:15.870 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:15.870 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:15.870 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:15.870 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:15.870 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:15.870 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:15.870 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:15.870 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:15.870 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:15.870 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:15.871 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:15.871 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 
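Device discovery here resolves each supported NIC's PCI address to its kernel interface name purely through sysfs; the two E810 ports found on this host come out as cvl_0_0 and cvl_0_1 just below. A rough standalone sketch of that mapping, with the PCI addresses of this run hard-coded; only the /sys/bus/pci/devices/<pci>/net/ glob pattern is taken from the script itself:

  # Map NVMe-oF-capable PCI NICs to their net device names via sysfs.
  pci_devs=("0000:af:00.0" "0000:af:00.1")   # E810 ports reported in this run
  net_devs=()
  for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one path per interface
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
  done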
00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:15.871 Found net devices under 0000:af:00.0: cvl_0_0 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:15.871 Found net devices under 0000:af:00.1: cvl_0_1 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # is_hw=yes 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:15.871 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:16.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:16.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:31:16.129 00:31:16.129 --- 10.0.0.2 ping statistics --- 00:31:16.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.129 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:16.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:16.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:31:16.129 00:31:16.129 --- 10.0.0.1 ping statistics --- 00:31:16.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.129 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # return 0 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3508266 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3508266 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 3508266 ']' 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:16.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:16.129 05:59:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.129 [2024-12-16 05:59:49.902747] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
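Stripped of the xtrace noise, the test topology built above is: one physical port (cvl_0_0) is moved into a private network namespace and acts as the target at 10.0.0.2, its peer port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, and nvmf_tgt is launched inside the namespace (host/fio.sh@23 above). A condensed sketch using the interface names, addresses and flags from this run:

  # Back-to-back ports: target side isolated in its own network namespace.
  TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"                           # target port into the namespace
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                    # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"   # target address
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
  ping -c 1 10.0.0.2                                             # initiator -> target sanity check
  ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &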
00:31:16.129 [2024-12-16 05:59:49.902793] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:16.129 [2024-12-16 05:59:49.963522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:16.387 [2024-12-16 05:59:50.005389] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:16.387 [2024-12-16 05:59:50.005427] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:16.387 [2024-12-16 05:59:50.005436] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:16.387 [2024-12-16 05:59:50.005443] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:16.387 [2024-12-16 05:59:50.005449] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:16.387 [2024-12-16 05:59:50.005497] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.387 [2024-12-16 05:59:50.005523] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:16.387 [2024-12-16 05:59:50.005610] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:16.387 [2024-12-16 05:59:50.005612] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.387 05:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:16.387 05:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:31:16.387 05:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:16.644 [2024-12-16 05:59:50.278765] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:16.644 05:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:16.644 05:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:16.644 05:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.644 05:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:16.901 Malloc1 00:31:16.901 05:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:17.159 05:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:17.159 05:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:17.416 [2024-12-16 05:59:51.140607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:17.416 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:17.673 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:17.674 05:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:17.931 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:17.931 fio-3.35 00:31:17.931 Starting 1 thread 00:31:20.461 00:31:20.461 test: (groupid=0, jobs=1): 
err= 0: pid=3508635: Mon Dec 16 05:59:53 2024 00:31:20.461 read: IOPS=11.9k, BW=46.4MiB/s (48.6MB/s)(93.0MiB/2005msec) 00:31:20.461 slat (nsec): min=1539, max=237117, avg=1724.35, stdev=2158.02 00:31:20.461 clat (usec): min=3210, max=10922, avg=5957.98, stdev=438.04 00:31:20.461 lat (usec): min=3244, max=10923, avg=5959.70, stdev=437.95 00:31:20.461 clat percentiles (usec): 00:31:20.461 | 1.00th=[ 4948], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5604], 00:31:20.461 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5997], 60.00th=[ 6063], 00:31:20.461 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6456], 95.00th=[ 6652], 00:31:20.461 | 99.00th=[ 6915], 99.50th=[ 7111], 99.90th=[ 8455], 99.95th=[ 8979], 00:31:20.461 | 99.99th=[10159] 00:31:20.461 bw ( KiB/s): min=46528, max=48048, per=99.97%, avg=47466.00, stdev=685.31, samples=4 00:31:20.461 iops : min=11632, max=12012, avg=11866.50, stdev=171.33, samples=4 00:31:20.461 write: IOPS=11.8k, BW=46.2MiB/s (48.4MB/s)(92.6MiB/2005msec); 0 zone resets 00:31:20.461 slat (nsec): min=1584, max=247458, avg=1785.08, stdev=1764.82 00:31:20.461 clat (usec): min=2458, max=8934, avg=4811.58, stdev=365.88 00:31:20.461 lat (usec): min=2473, max=8936, avg=4813.37, stdev=365.83 00:31:20.461 clat percentiles (usec): 00:31:20.461 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4555], 00:31:20.461 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4883], 00:31:20.461 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5276], 95.00th=[ 5342], 00:31:20.461 | 99.00th=[ 5604], 99.50th=[ 5669], 99.90th=[ 7373], 99.95th=[ 8356], 00:31:20.461 | 99.99th=[ 8848] 00:31:20.461 bw ( KiB/s): min=46912, max=47792, per=100.00%, avg=47268.00, stdev=386.52, samples=4 00:31:20.461 iops : min=11728, max=11946, avg=11817.00, stdev=95.81, samples=4 00:31:20.461 lat (msec) : 4=0.71%, 10=99.28%, 20=0.01% 00:31:20.461 cpu : usr=74.80%, sys=23.95%, ctx=77, majf=0, minf=4 00:31:20.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:20.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:20.461 issued rwts: total=23800,23694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:20.461 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:20.461 00:31:20.461 Run status group 0 (all jobs): 00:31:20.461 READ: bw=46.4MiB/s (48.6MB/s), 46.4MiB/s-46.4MiB/s (48.6MB/s-48.6MB/s), io=93.0MiB (97.5MB), run=2005-2005msec 00:31:20.461 WRITE: bw=46.2MiB/s (48.4MB/s), 46.2MiB/s-46.2MiB/s (48.4MB/s-48.4MB/s), io=92.6MiB (97.1MB), run=2005-2005msec 00:31:20.461 05:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:20.461 05:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:20.461 05:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:20.461 05:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:20.461 05:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
local sanitizers 00:31:20.461 05:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:20.461 05:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:20.461 05:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:20.461 05:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:20.461 05:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:20.461 05:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:20.461 05:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:20.461 05:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:20.461 05:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:20.461 05:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:20.461 05:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:20.461 05:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:20.461 05:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:20.461 05:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:20.461 05:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:20.461 05:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:20.461 05:59:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:20.461 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:20.461 fio-3.35 00:31:20.461 Starting 1 thread 00:31:22.994 00:31:22.994 test: (groupid=0, jobs=1): err= 0: pid=3509200: Mon Dec 16 05:59:56 2024 00:31:22.994 read: IOPS=11.0k, BW=172MiB/s (180MB/s)(345MiB/2004msec) 00:31:22.994 slat (nsec): min=2486, max=86183, avg=2806.84, stdev=1161.05 00:31:22.994 clat (usec): min=1463, max=12742, avg=6736.88, stdev=1657.47 00:31:22.994 lat (usec): min=1465, max=12745, avg=6739.69, stdev=1657.53 00:31:22.994 clat percentiles (usec): 00:31:22.994 | 1.00th=[ 3523], 5.00th=[ 4178], 10.00th=[ 4621], 20.00th=[ 5342], 00:31:22.994 | 30.00th=[ 5800], 40.00th=[ 6194], 50.00th=[ 6652], 60.00th=[ 7177], 00:31:22.994 | 70.00th=[ 7570], 80.00th=[ 7963], 90.00th=[ 8848], 95.00th=[ 9634], 00:31:22.994 | 99.00th=[11338], 99.50th=[11731], 99.90th=[12518], 99.95th=[12649], 00:31:22.994 | 99.99th=[12649] 00:31:22.994 bw ( KiB/s): min=84160, max=95360, per=50.32%, avg=88608.00, stdev=5169.49, samples=4 00:31:22.994 iops : min= 5260, max= 5960, avg=5538.00, stdev=323.09, samples=4 00:31:22.994 write: IOPS=6485, BW=101MiB/s (106MB/s)(182MiB/1792msec); 0 zone resets 00:31:22.994 
slat (usec): min=29, max=255, avg=31.47, stdev= 4.85 00:31:22.994 clat (usec): min=3834, max=15209, avg=8444.49, stdev=1417.75 00:31:22.994 lat (usec): min=3864, max=15243, avg=8475.96, stdev=1418.09 00:31:22.994 clat percentiles (usec): 00:31:22.994 | 1.00th=[ 5669], 5.00th=[ 6259], 10.00th=[ 6718], 20.00th=[ 7242], 00:31:22.994 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8717], 00:31:22.994 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[10945], 00:31:22.994 | 99.00th=[11994], 99.50th=[12256], 99.90th=[12911], 99.95th=[14484], 00:31:22.994 | 99.99th=[14877] 00:31:22.994 bw ( KiB/s): min=88704, max=99200, per=89.11%, avg=92464.00, stdev=4733.44, samples=4 00:31:22.994 iops : min= 5544, max= 6200, avg=5779.00, stdev=295.84, samples=4 00:31:22.994 lat (msec) : 2=0.05%, 4=2.31%, 10=90.15%, 20=7.48% 00:31:22.994 cpu : usr=86.42%, sys=12.63%, ctx=82, majf=0, minf=4 00:31:22.994 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:31:22.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:22.994 issued rwts: total=22057,11622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.994 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:22.994 00:31:22.994 Run status group 0 (all jobs): 00:31:22.994 READ: bw=172MiB/s (180MB/s), 172MiB/s-172MiB/s (180MB/s-180MB/s), io=345MiB (361MB), run=2004-2004msec 00:31:22.994 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=182MiB (190MB), run=1792-1792msec 00:31:22.994 05:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:23.252 05:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:23.253 05:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:23.253 05:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:23.253 05:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:31:23.253 05:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:31:23.253 05:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:23.253 05:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:23.253 05:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:31:23.253 05:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:31:23.253 05:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:31:23.253 05:59:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:31:26.540 Nvme0n1 00:31:26.540 06:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:29.825 06:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=bc73e7cd-aa70-45e2-8142-8b528e9f925b 00:31:29.825 
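For the fio stage the backing store is local: the PCIe NVMe drive at 0000:5e:00.0 is attached as Nvme0, an lvol store with 1 GiB clusters is created on Nvme0n1, a single lvol is sized to the store's free space, and that lvol is exported through a second subsystem (the remaining RPCs appear just below). A sketch of the sequence with the values from this run; the free-space figure is free_clusters x cluster_size in MiB, i.e. 930 x 1073741824 / 1048576 = 952320:

  # Export a full-size lvol carved from the local NVMe drive over NVMe/TCP.
  RPC=$SPDK_DIR/scripts/rpc.py                                        # SPDK_DIR as in this run
  $RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0   # local drive -> bdev Nvme0n1
  $RPC bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0           # 1 GiB clusters
  $RPC bdev_lvol_create -l lvs_0 lbd_0 952320                         # use all 930 free clusters
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420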
06:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb bc73e7cd-aa70-45e2-8142-8b528e9f925b 00:31:29.825 06:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=bc73e7cd-aa70-45e2-8142-8b528e9f925b 00:31:29.825 06:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:29.825 06:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:29.825 06:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:29.825 06:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:29.825 06:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:29.825 { 00:31:29.825 "uuid": "bc73e7cd-aa70-45e2-8142-8b528e9f925b", 00:31:29.825 "name": "lvs_0", 00:31:29.825 "base_bdev": "Nvme0n1", 00:31:29.825 "total_data_clusters": 930, 00:31:29.825 "free_clusters": 930, 00:31:29.825 "block_size": 512, 00:31:29.825 "cluster_size": 1073741824 00:31:29.825 } 00:31:29.825 ]' 00:31:29.825 06:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="bc73e7cd-aa70-45e2-8142-8b528e9f925b") .free_clusters' 00:31:29.825 06:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:31:29.825 06:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="bc73e7cd-aa70-45e2-8142-8b528e9f925b") .cluster_size' 00:31:29.825 06:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:31:29.825 06:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:31:29.825 06:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:31:29.825 952320 00:31:29.825 06:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:29.825 daa0ba80-32ee-4bd9-9795-2705915e1ea8 00:31:29.825 06:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:30.083 06:00:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:30.342 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:30.612 06:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:30.869 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:30.869 fio-3.35 00:31:30.869 Starting 1 thread 00:31:33.388 [2024-12-16 06:00:06.760117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0c90 is same with the state(6) to be set 00:31:33.388 00:31:33.388 test: (groupid=0, jobs=1): err= 0: pid=3511026: Mon Dec 16 06:00:06 2024 00:31:33.388 read: IOPS=8032, BW=31.4MiB/s (32.9MB/s)(62.9MiB/2006msec) 00:31:33.388 slat (nsec): min=1528, max=103577, avg=1720.77, stdev=1062.65 00:31:33.388 clat (usec): min=902, max=170067, avg=8777.12, stdev=10288.49 00:31:33.388 lat (usec): min=903, max=170085, avg=8778.84, stdev=10288.64 00:31:33.388 clat percentiles (msec): 00:31:33.388 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:31:33.388 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:31:33.388 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 10], 
00:31:33.388 | 99.00th=[ 10], 99.50th=[ 14], 99.90th=[ 169], 99.95th=[ 171], 00:31:33.388 | 99.99th=[ 171] 00:31:33.388 bw ( KiB/s): min=22824, max=35280, per=99.86%, avg=32088.00, stdev=6177.50, samples=4 00:31:33.388 iops : min= 5706, max= 8820, avg=8022.00, stdev=1544.37, samples=4 00:31:33.388 write: IOPS=8006, BW=31.3MiB/s (32.8MB/s)(62.7MiB/2006msec); 0 zone resets 00:31:33.388 slat (nsec): min=1564, max=77483, avg=1791.23, stdev=761.17 00:31:33.388 clat (usec): min=199, max=168629, avg=7080.11, stdev=9624.62 00:31:33.388 lat (usec): min=200, max=168633, avg=7081.90, stdev=9624.77 00:31:33.388 clat percentiles (msec): 00:31:33.388 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:31:33.388 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:31:33.388 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8], 00:31:33.388 | 99.00th=[ 8], 99.50th=[ 9], 99.90th=[ 169], 99.95th=[ 169], 00:31:33.388 | 99.99th=[ 169] 00:31:33.388 bw ( KiB/s): min=23656, max=34824, per=99.94%, avg=32010.00, stdev=5569.37, samples=4 00:31:33.388 iops : min= 5914, max= 8706, avg=8002.50, stdev=1392.34, samples=4 00:31:33.388 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:31:33.388 lat (msec) : 2=0.04%, 4=0.24%, 10=99.11%, 20=0.18%, 250=0.40% 00:31:33.388 cpu : usr=74.36%, sys=24.54%, ctx=126, majf=0, minf=4 00:31:33.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:33.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:33.388 issued rwts: total=16114,16062,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.389 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:33.389 00:31:33.389 Run status group 0 (all jobs): 00:31:33.389 READ: bw=31.4MiB/s (32.9MB/s), 31.4MiB/s-31.4MiB/s (32.9MB/s-32.9MB/s), io=62.9MiB (66.0MB), run=2006-2006msec 00:31:33.389 WRITE: bw=31.3MiB/s (32.8MB/s), 31.3MiB/s-31.3MiB/s (32.8MB/s-32.8MB/s), io=62.7MiB (65.8MB), run=2006-2006msec 00:31:33.389 06:00:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:33.389 06:00:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:34.319 06:00:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=410d8c85-97d5-4dcd-a895-95f9857a0d28 00:31:34.319 06:00:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 410d8c85-97d5-4dcd-a895-95f9857a0d28 00:31:34.319 06:00:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=410d8c85-97d5-4dcd-a895-95f9857a0d28 00:31:34.319 06:00:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:34.319 06:00:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:34.319 06:00:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:34.319 06:00:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:34.576 06:00:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:34.576 { 00:31:34.576 "uuid": "bc73e7cd-aa70-45e2-8142-8b528e9f925b", 00:31:34.576 
"name": "lvs_0", 00:31:34.576 "base_bdev": "Nvme0n1", 00:31:34.576 "total_data_clusters": 930, 00:31:34.576 "free_clusters": 0, 00:31:34.576 "block_size": 512, 00:31:34.576 "cluster_size": 1073741824 00:31:34.576 }, 00:31:34.576 { 00:31:34.576 "uuid": "410d8c85-97d5-4dcd-a895-95f9857a0d28", 00:31:34.576 "name": "lvs_n_0", 00:31:34.576 "base_bdev": "daa0ba80-32ee-4bd9-9795-2705915e1ea8", 00:31:34.576 "total_data_clusters": 237847, 00:31:34.576 "free_clusters": 237847, 00:31:34.576 "block_size": 512, 00:31:34.576 "cluster_size": 4194304 00:31:34.576 } 00:31:34.576 ]' 00:31:34.576 06:00:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="410d8c85-97d5-4dcd-a895-95f9857a0d28") .free_clusters' 00:31:34.576 06:00:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:31:34.576 06:00:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="410d8c85-97d5-4dcd-a895-95f9857a0d28") .cluster_size' 00:31:34.576 06:00:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:34.576 06:00:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:31:34.576 06:00:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:31:34.576 951388 00:31:34.576 06:00:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:35.140 e9d5bee7-da2f-4dcc-a32d-5a851d257230 00:31:35.140 06:00:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:35.397 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:35.653 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1343 -- # local asan_lib= 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:35.910 06:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:36.167 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:36.167 fio-3.35 00:31:36.167 Starting 1 thread 00:31:38.691 00:31:38.691 test: (groupid=0, jobs=1): err= 0: pid=3512432: Mon Dec 16 06:00:12 2024 00:31:38.691 read: IOPS=7824, BW=30.6MiB/s (32.0MB/s)(61.3MiB/2006msec) 00:31:38.691 slat (nsec): min=1534, max=112438, avg=1685.50, stdev=1312.79 00:31:38.691 clat (usec): min=3291, max=14796, avg=9052.35, stdev=764.48 00:31:38.691 lat (usec): min=3295, max=14798, avg=9054.04, stdev=764.42 00:31:38.691 clat percentiles (usec): 00:31:38.691 | 1.00th=[ 7242], 5.00th=[ 7832], 10.00th=[ 8094], 20.00th=[ 8455], 00:31:38.691 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9241], 00:31:38.691 | 70.00th=[ 9503], 80.00th=[ 9634], 90.00th=[10028], 95.00th=[10290], 00:31:38.691 | 99.00th=[10683], 99.50th=[10945], 99.90th=[12649], 99.95th=[13960], 00:31:38.691 | 99.99th=[14746] 00:31:38.691 bw ( KiB/s): min=30176, max=31816, per=99.78%, avg=31228.00, stdev=743.17, samples=4 00:31:38.691 iops : min= 7544, max= 7954, avg=7807.00, stdev=185.79, samples=4 00:31:38.691 write: IOPS=7797, BW=30.5MiB/s (31.9MB/s)(61.1MiB/2006msec); 0 zone resets 00:31:38.691 slat (nsec): min=1564, max=113584, avg=1752.08, stdev=1191.89 00:31:38.691 clat (usec): min=1606, max=12785, avg=7268.38, stdev=637.63 00:31:38.691 lat (usec): min=1612, max=12787, avg=7270.13, stdev=637.60 00:31:38.691 clat percentiles (usec): 00:31:38.691 | 1.00th=[ 5800], 5.00th=[ 6259], 10.00th=[ 6521], 20.00th=[ 6783], 00:31:38.691 | 30.00th=[ 
6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7439], 00:31:38.691 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8225], 00:31:38.691 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[10421], 99.95th=[11600], 00:31:38.691 | 99.99th=[12649] 00:31:38.691 bw ( KiB/s): min=31104, max=31296, per=100.00%, avg=31190.00, stdev=79.83, samples=4 00:31:38.691 iops : min= 7776, max= 7824, avg=7797.50, stdev=19.96, samples=4 00:31:38.691 lat (msec) : 2=0.01%, 4=0.07%, 10=95.10%, 20=4.82% 00:31:38.691 cpu : usr=67.33%, sys=31.62%, ctx=117, majf=0, minf=4 00:31:38.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:38.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:38.691 issued rwts: total=15695,15641,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.691 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:38.691 00:31:38.691 Run status group 0 (all jobs): 00:31:38.691 READ: bw=30.6MiB/s (32.0MB/s), 30.6MiB/s-30.6MiB/s (32.0MB/s-32.0MB/s), io=61.3MiB (64.3MB), run=2006-2006msec 00:31:38.691 WRITE: bw=30.5MiB/s (31.9MB/s), 30.5MiB/s-30.5MiB/s (31.9MB/s-31.9MB/s), io=61.1MiB (64.1MB), run=2006-2006msec 00:31:38.691 06:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:38.691 06:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:38.691 06:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:42.873 06:00:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:42.873 06:00:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:46.151 06:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:46.151 06:00:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:47.535 rmmod nvme_tcp 00:31:47.535 rmmod nvme_fabrics 00:31:47.535 rmmod nvme_keyring 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:47.535 06:00:21 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 3508266 ']' 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 3508266 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 3508266 ']' 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 3508266 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3508266 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3508266' 00:31:47.535 killing process with pid 3508266 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 3508266 00:31:47.535 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 3508266 00:31:47.794 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:47.794 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:47.794 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:47.794 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:31:47.794 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:47.794 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:31:47.794 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:31:47.794 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:47.794 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:47.794 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.794 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.794 06:00:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:50.327 00:31:50.327 real 0m39.371s 00:31:50.327 user 2m39.762s 00:31:50.327 sys 0m8.443s 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.327 ************************************ 00:31:50.327 END TEST nvmf_fio_host 00:31:50.327 ************************************ 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 
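Note: the harness hands the failover suite to run_test with a single --transport argument, as logged above. A minimal sketch of an equivalent standalone invocation, assuming the same workspace checkout and that the target/initiator interfaces are already configured as in this run:
    # replay the NVMe-oF host failover suite over TCP (same script and flag as the harness uses)
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ./test/nvmf/host/failover.sh --transport=tcp
Everything else the script needs (addresses, namespaces, cleanup traps) is sourced from test/nvmf/common.sh, as the trace below shows.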
00:31:50.327 06:00:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.327 ************************************ 00:31:50.327 START TEST nvmf_failover 00:31:50.327 ************************************ 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:50.327 * Looking for test storage... 00:31:50.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:50.327 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:50.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.327 --rc genhtml_branch_coverage=1 00:31:50.327 --rc genhtml_function_coverage=1 00:31:50.327 --rc genhtml_legend=1 00:31:50.327 --rc geninfo_all_blocks=1 00:31:50.327 --rc geninfo_unexecuted_blocks=1 00:31:50.327 00:31:50.327 ' 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:50.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.328 --rc genhtml_branch_coverage=1 00:31:50.328 --rc genhtml_function_coverage=1 00:31:50.328 --rc genhtml_legend=1 00:31:50.328 --rc geninfo_all_blocks=1 00:31:50.328 --rc geninfo_unexecuted_blocks=1 00:31:50.328 00:31:50.328 ' 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:50.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.328 --rc genhtml_branch_coverage=1 00:31:50.328 --rc genhtml_function_coverage=1 00:31:50.328 --rc genhtml_legend=1 00:31:50.328 --rc geninfo_all_blocks=1 00:31:50.328 --rc geninfo_unexecuted_blocks=1 00:31:50.328 00:31:50.328 ' 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:50.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.328 --rc genhtml_branch_coverage=1 00:31:50.328 --rc genhtml_function_coverage=1 00:31:50.328 --rc genhtml_legend=1 00:31:50.328 --rc geninfo_all_blocks=1 00:31:50.328 --rc geninfo_unexecuted_blocks=1 00:31:50.328 00:31:50.328 ' 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:50.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
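The target side of the failover test is driven entirely through the rpc_py helper defined just above. Condensed, the setup it issues later in this trace amounts to the following sketch — the bdev size/block size, NQN, address and the three listener ports are copied from the RPC calls logged below, not new:
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                          # TCP transport, flags as logged by the harness
    $rpc bdev_malloc_create 64 512 -b Malloc0                             # 64 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                                        # three listeners, so bdevperf can fail over between them
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
bdevperf then attaches to the first listener over /var/tmp/bdevperf.sock and the test removes listeners one by one to exercise reconnection.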
00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:31:50.328 06:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:55.594 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:55.594 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:55.594 Found net devices under 0000:af:00.0: cvl_0_0 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:55.594 Found net devices under 0000:af:00.1: cvl_0_1 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # is_hw=yes 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:55.594 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:55.595 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:55.853 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:55.853 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:55.853 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:55.853 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:55.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:55.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:31:55.853 00:31:55.853 --- 10.0.0.2 ping statistics --- 00:31:55.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.853 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:31:55.853 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:55.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:55.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:31:55.853 00:31:55.853 --- 10.0.0.1 ping statistics --- 00:31:55.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.853 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:31:55.853 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:55.853 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # return 0 00:31:55.853 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:55.853 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:55.853 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:55.853 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:55.853 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:55.854 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:55.854 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:55.854 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:55.854 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:55.854 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:55.854 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:55.854 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=3517668 00:31:55.854 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 3517668 00:31:55.854 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:55.854 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3517668 ']' 00:31:55.854 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.854 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:55.854 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.854 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:55.854 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:55.854 [2024-12-16 06:00:29.596073] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:31:55.854 [2024-12-16 06:00:29.596136] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.854 [2024-12-16 06:00:29.658171] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:55.854 [2024-12-16 06:00:29.698349] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:55.854 [2024-12-16 06:00:29.698390] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:55.854 [2024-12-16 06:00:29.698398] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:55.854 [2024-12-16 06:00:29.698406] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:55.854 [2024-12-16 06:00:29.698411] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:55.854 [2024-12-16 06:00:29.698486] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:55.854 [2024-12-16 06:00:29.698574] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:55.854 [2024-12-16 06:00:29.698575] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:56.111 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:56.111 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:31:56.111 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:56.111 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:56.111 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:56.111 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:56.111 06:00:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:56.369 [2024-12-16 06:00:30.004907] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.369 06:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:56.626 Malloc0 00:31:56.627 06:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:56.627 06:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:56.884 06:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:57.142 [2024-12-16 06:00:30.854014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:57.142 06:00:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:57.399 [2024-12-16 06:00:31.042442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:57.399 06:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:57.399 [2024-12-16 06:00:31.231041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:31:57.657 06:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3517925 00:31:57.657 06:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:57.657 06:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:57.657 06:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3517925 /var/tmp/bdevperf.sock 00:31:57.657 06:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3517925 ']' 00:31:57.657 06:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:57.657 06:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:57.657 06:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:57.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:57.657 06:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:57.657 06:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:57.657 06:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:57.657 06:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:31:57.657 06:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:58.221 NVMe0n1 00:31:58.221 06:00:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:58.479 00:31:58.479 06:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3518148 00:31:58.479 06:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:58.479 06:00:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:59.927 06:00:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:59.927 [2024-12-16 06:00:33.499685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2300 is same with the state(6) to be set 00:31:59.927 [2024-12-16 06:00:33.499757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2300 is same with the state(6) to be set 00:31:59.927 [2024-12-16 06:00:33.499768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2300 is same with the state(6) to be set 00:31:59.927 [2024-12-16 06:00:33.499776] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2300 is same with the state(6) to be set 00:31:59.927
[... same tcp.c:1773 recv-state *ERROR* message for tqpair=0x1ae2300 repeated with successive timestamps ...]
00:31:59.927 [2024-12-16 06:00:33.500146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1ae2300 is same with the state(6) to be set 00:31:59.927 [2024-12-16 06:00:33.500155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2300 is same with the state(6) to be set 00:31:59.927 [2024-12-16 06:00:33.500165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2300 is same with the state(6) to be set 00:31:59.927 [2024-12-16 06:00:33.500174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2300 is same with the state(6) to be set 00:31:59.927 [2024-12-16 06:00:33.500183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2300 is same with the state(6) to be set 00:31:59.927 [2024-12-16 06:00:33.500192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2300 is same with the state(6) to be set 00:31:59.927 [2024-12-16 06:00:33.500201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2300 is same with the state(6) to be set 00:31:59.927 [2024-12-16 06:00:33.500210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2300 is same with the state(6) to be set 00:31:59.927 [2024-12-16 06:00:33.500219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2300 is same with the state(6) to be set 00:31:59.927 [2024-12-16 06:00:33.500229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2300 is same with the state(6) to be set 00:31:59.927 [2024-12-16 06:00:33.500239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2300 is same with the state(6) to be set 00:31:59.927 [2024-12-16 06:00:33.500247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2300 is same with the state(6) to be set 00:31:59.927 [2024-12-16 06:00:33.500257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2300 is same with the state(6) to be set 00:31:59.927 [2024-12-16 06:00:33.500268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2300 is same with the state(6) to be set 00:31:59.927 [2024-12-16 06:00:33.500277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2300 is same with the state(6) to be set 00:31:59.927 [2024-12-16 06:00:33.500286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2300 is same with the state(6) to be set 00:31:59.927 [2024-12-16 06:00:33.500295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2300 is same with the state(6) to be set 00:31:59.927 06:00:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:03.243 06:00:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:03.243 00:32:03.243 06:00:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:03.243 [2024-12-16 06:00:36.993131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.243 [2024-12-16 06:00:36.993169] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.243
[... same tcp.c:1773 recv-state *ERROR* message for tqpair=0x1ae3100 repeated with successive timestamps ...]
00:32:03.244 [2024-12-16
06:00:36.993563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.244 [2024-12-16 06:00:36.993569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.244 [2024-12-16 06:00:36.993575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.244 [2024-12-16 06:00:36.993580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.244 [2024-12-16 06:00:36.993586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.244 [2024-12-16 06:00:36.993592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.244 [2024-12-16 06:00:36.993598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.244 [2024-12-16 06:00:36.993605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.244 [2024-12-16 06:00:36.993611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.244 [2024-12-16 06:00:36.993617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.244 [2024-12-16 06:00:36.993623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.244 [2024-12-16 06:00:36.993628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.244 [2024-12-16 06:00:36.993634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.244 [2024-12-16 06:00:36.993640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.244 [2024-12-16 06:00:36.993646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.244 [2024-12-16 06:00:36.993652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.244 [2024-12-16 06:00:36.993658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.244 [2024-12-16 06:00:36.993663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.244 [2024-12-16 06:00:36.993669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.244 [2024-12-16 06:00:36.993675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3100 is same with the state(6) to be set 00:32:03.244 06:00:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:06.523 06:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:32:06.523 [2024-12-16 06:00:40.207565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:06.523 06:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:07.454 06:00:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:07.712 [2024-12-16 06:00:41.418720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3cc0 is same with the state(6) to be set 00:32:07.712 [2024-12-16 06:00:41.418758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3cc0 is same with the state(6) to be set 00:32:07.712 [2024-12-16 06:00:41.418766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae3cc0 is same with the state(6) to be set 00:32:07.712 06:00:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3518148 00:32:14.270 { 00:32:14.271 "results": [ 00:32:14.271 { 00:32:14.271 "job": "NVMe0n1", 00:32:14.271 "core_mask": "0x1", 00:32:14.271 "workload": "verify", 00:32:14.271 "status": "finished", 00:32:14.271 "verify_range": { 00:32:14.271 "start": 0, 00:32:14.271 "length": 16384 00:32:14.271 }, 00:32:14.271 "queue_depth": 128, 00:32:14.271 "io_size": 4096, 00:32:14.271 "runtime": 15.004156, 00:32:14.271 "iops": 11140.380038703943, 00:32:14.271 "mibps": 43.51710952618728, 00:32:14.271 "io_failed": 10053, 00:32:14.271 "io_timeout": 0, 00:32:14.271 "avg_latency_us": 10816.483592589158, 00:32:14.271 "min_latency_us": 417.40190476190475, 00:32:14.271 "max_latency_us": 13044.784761904762 00:32:14.271 } 00:32:14.271 ], 00:32:14.271 "core_count": 1 00:32:14.271 } 00:32:14.271 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3517925 00:32:14.271 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3517925 ']' 00:32:14.271 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3517925 00:32:14.271 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:14.271 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:14.271 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3517925 00:32:14.271 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:14.271 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:14.271 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3517925' 00:32:14.271 killing process with pid 3517925 00:32:14.271 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3517925 00:32:14.271 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3517925 00:32:14.271 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:14.271 [2024-12-16 06:00:31.309545] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
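As a quick sanity check on the bdevperf result summary above (assuming bc is available on the build host): with the 4096-byte io_size shown, the reported IOPS and MiB/s figures are consistent.
  # 11140.38 IOPS x 4096 B per I/O, converted to MiB/s (1 MiB = 1048576 B)
  echo "11140.380038703943 * 4096 / 1048576" | bc -l    # ~= 43.517, matching the reported "mibps"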
00:32:14.271 [2024-12-16 06:00:31.309602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3517925 ] 00:32:14.271 [2024-12-16 06:00:31.366208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.271 [2024-12-16 06:00:31.405498] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.271 Running I/O for 15 seconds... 00:32:14.271 11279.00 IOPS, 44.06 MiB/s [2024-12-16T05:00:48.127Z] [2024-12-16 06:00:33.501627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.501663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.501678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.501686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.501696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.501703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.501712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.501718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.501726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.501734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.501742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.501749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.501758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.501764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.501773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.501780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.501788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99864 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.501795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.501803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.501809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.501817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.501824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.501836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.501843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.501859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.501866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.501874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.501880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.501889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.501895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.501903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.501910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.501918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.501924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.501932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.501939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.501946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:14.271 [2024-12-16 06:00:33.501953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.501961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.501967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.501976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.501983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.501991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.501997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.502005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.502011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.502019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.502027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.502035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.502042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.502049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.502056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.502064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.502070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.502079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 06:00:33.502085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.271 [2024-12-16 06:00:33.502092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.271 [2024-12-16 
06:00:33.502099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.272 [2024-12-16 06:00:33.502113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.272 [2024-12-16 06:00:33.502128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.272 [2024-12-16 06:00:33.502142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.272 [2024-12-16 06:00:33.502156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.272 [2024-12-16 06:00:33.502170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.272 [2024-12-16 06:00:33.502184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.272 [2024-12-16 06:00:33.502198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.272 [2024-12-16 06:00:33.502214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.272 [2024-12-16 06:00:33.502228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.272 [2024-12-16 06:00:33.502243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.272 [2024-12-16 06:00:33.502257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.272 [2024-12-16 06:00:33.502271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.272 [2024-12-16 06:00:33.502285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.272 [2024-12-16 06:00:33.502299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.272 [2024-12-16 06:00:33.502315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.272 [2024-12-16 06:00:33.502329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.272 [2024-12-16 06:00:33.502344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.272 [2024-12-16 06:00:33.502358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.272 [2024-12-16 06:00:33.502372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.272 [2024-12-16 06:00:33.502388] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.272 [2024-12-16 06:00:33.502403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.272 [2024-12-16 06:00:33.502418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.272 [2024-12-16 06:00:33.502433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.272 [2024-12-16 06:00:33.502448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.272 [2024-12-16 06:00:33.502462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.272 [2024-12-16 06:00:33.502477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.272 [2024-12-16 06:00:33.502492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.272 [2024-12-16 06:00:33.502506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.272 [2024-12-16 06:00:33.502520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.272 [2024-12-16 06:00:33.502535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.272 [2024-12-16 06:00:33.502550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.272 [2024-12-16 06:00:33.502564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.272 [2024-12-16 06:00:33.502583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.272 [2024-12-16 06:00:33.502598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.272 [2024-12-16 06:00:33.502615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.272 [2024-12-16 06:00:33.502629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.272 [2024-12-16 06:00:33.502643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.272 [2024-12-16 06:00:33.502657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.272 [2024-12-16 06:00:33.502672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.272 [2024-12-16 06:00:33.502680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.272 [2024-12-16 06:00:33.502687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:14.273 [2024-12-16 06:00:33.502695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.502701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.502709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.502716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.502723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.502730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.502738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.502744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.502752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.502759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.502768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.502775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.502782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.502789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.502796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.502803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.502811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.502817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.502825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.502831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.502839] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.502850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.502858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.502864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.502872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.502878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.502886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.502893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.502901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.502907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.502915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.502921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.502930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.502936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.502944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.502952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.502960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.502966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.502974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.502980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.502988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.502995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.503002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.503009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.503017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.503023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.503031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.503037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.503045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.503051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.503059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.503065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.503073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.503079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.503087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.503093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.503101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.503108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.503115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.503122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.503132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:14 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.503138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.503146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.503152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.503160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.503166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.503174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.503181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.503189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.503195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.503203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.503209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.503216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.503223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.503230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.503237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.503249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.503255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.503263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.503270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.273 [2024-12-16 06:00:33.503277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100672 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.273 [2024-12-16 06:00:33.503284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.274 [2024-12-16 06:00:33.503298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.274 [2024-12-16 06:00:33.503314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.274 [2024-12-16 06:00:33.503340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100696 len:8 PRP1 0x0 PRP2 0x0 00:32:14.274 [2024-12-16 06:00:33.503347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.274 [2024-12-16 06:00:33.503361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.274 [2024-12-16 06:00:33.503367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100704 len:8 PRP1 0x0 PRP2 0x0 00:32:14.274 [2024-12-16 06:00:33.503373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.274 [2024-12-16 06:00:33.503385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.274 [2024-12-16 06:00:33.503391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100712 len:8 PRP1 0x0 PRP2 0x0 00:32:14.274 [2024-12-16 06:00:33.503397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.274 [2024-12-16 06:00:33.503410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.274 [2024-12-16 06:00:33.503415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100720 len:8 PRP1 0x0 PRP2 0x0 00:32:14.274 [2024-12-16 06:00:33.503421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.274 [2024-12-16 06:00:33.503433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.274 [2024-12-16 06:00:33.503438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100728 len:8 PRP1 0x0 PRP2 0x0 
00:32:14.274 [2024-12-16 06:00:33.503444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.274 [2024-12-16 06:00:33.503456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.274 [2024-12-16 06:00:33.503461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100736 len:8 PRP1 0x0 PRP2 0x0 00:32:14.274 [2024-12-16 06:00:33.503467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.274 [2024-12-16 06:00:33.503479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.274 [2024-12-16 06:00:33.503484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100744 len:8 PRP1 0x0 PRP2 0x0 00:32:14.274 [2024-12-16 06:00:33.503491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.274 [2024-12-16 06:00:33.503502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.274 [2024-12-16 06:00:33.503507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100752 len:8 PRP1 0x0 PRP2 0x0 00:32:14.274 [2024-12-16 06:00:33.503516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.274 [2024-12-16 06:00:33.503527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.274 [2024-12-16 06:00:33.503532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100760 len:8 PRP1 0x0 PRP2 0x0 00:32:14.274 [2024-12-16 06:00:33.503539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.274 [2024-12-16 06:00:33.503551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.274 [2024-12-16 06:00:33.503556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100768 len:8 PRP1 0x0 PRP2 0x0 00:32:14.274 [2024-12-16 06:00:33.503563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.274 [2024-12-16 06:00:33.503575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.274 [2024-12-16 06:00:33.503581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100776 len:8 PRP1 0x0 PRP2 0x0 00:32:14.274 [2024-12-16 06:00:33.503587] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.274 [2024-12-16 06:00:33.503599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.274 [2024-12-16 06:00:33.503604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100784 len:8 PRP1 0x0 PRP2 0x0 00:32:14.274 [2024-12-16 06:00:33.503611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.274 [2024-12-16 06:00:33.503622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.274 [2024-12-16 06:00:33.503628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100792 len:8 PRP1 0x0 PRP2 0x0 00:32:14.274 [2024-12-16 06:00:33.503634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.274 [2024-12-16 06:00:33.503646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.274 [2024-12-16 06:00:33.503651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100800 len:8 PRP1 0x0 PRP2 0x0 00:32:14.274 [2024-12-16 06:00:33.503657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.274 [2024-12-16 06:00:33.503669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.274 [2024-12-16 06:00:33.503674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100808 len:8 PRP1 0x0 PRP2 0x0 00:32:14.274 [2024-12-16 06:00:33.503680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.274 [2024-12-16 06:00:33.503693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.274 [2024-12-16 06:00:33.503698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100816 len:8 PRP1 0x0 PRP2 0x0 00:32:14.274 [2024-12-16 06:00:33.503705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503744] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c211c0 was disconnected and freed. reset controller. 
00:32:14.274 [2024-12-16 06:00:33.503752] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:14.274 [2024-12-16 06:00:33.503772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.274 [2024-12-16 06:00:33.503783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.274 [2024-12-16 06:00:33.503798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.274 [2024-12-16 06:00:33.503811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.274 [2024-12-16 06:00:33.503826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:33.503832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:14.274 [2024-12-16 06:00:33.503865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfeac0 (9): Bad file descriptor 00:32:14.274 [2024-12-16 06:00:33.506596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:14.274 [2024-12-16 06:00:33.581755] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
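The run above shows queued WRITE/READ commands on qid:1 being completed with "ABORTED - SQ DELETION (00/08)" while qpair 0x1c211c0 is disconnected and freed, after which bdev_nvme fails over the trid from 10.0.0.2:4420 to 10.0.0.2:4421 and the controller reset completes. A minimal sketch for tallying these completions from a saved copy of this console output is below; the regex assumes the spdk_nvme_print_completion format printed here, and the input path "console.log" is a hypothetical placeholder, not part of the test harness.

    #!/usr/bin/env python3
    # Tally NVMe completion notices by status string and queue ID.
    # Assumes the "spdk_nvme_print_completion: *NOTICE*: <STATUS> (sct/sc) qid:N ..."
    # format shown in this log; multiple entries per physical line are handled.
    import re
    from collections import Counter

    COMPLETION_RE = re.compile(
        r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>.+?) "
        r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) qid:(?P<qid>\d+)"
    )

    def tally(path="console.log"):
        counts = Counter()
        with open(path, errors="replace") as f:
            for line in f:
                for m in COMPLETION_RE.finditer(line):
                    counts[(m.group("status"), m.group("qid"))] += 1
        return counts

    if __name__ == "__main__":
        for (status, qid), n in tally().most_common():
            print(f"qid {qid}: {n:6d} x {status}")

Run against this section, such a tally would show the abort completions concentrated on qid:1 (the I/O queue being deleted), with the admin queue (qid:0) only reporting the aborted ASYNC EVENT REQUESTs logged during the reset.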
00:32:14.274 10898.50 IOPS, 42.57 MiB/s [2024-12-16T05:00:48.130Z] 11049.00 IOPS, 43.16 MiB/s [2024-12-16T05:00:48.130Z] 11152.75 IOPS, 43.57 MiB/s [2024-12-16T05:00:48.130Z] [2024-12-16 06:00:36.995004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.274 [2024-12-16 06:00:36.995037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:36.995052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.274 [2024-12-16 06:00:36.995060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:36.995070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.274 [2024-12-16 06:00:36.995077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:36.995086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.274 [2024-12-16 06:00:36.995094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.274 [2024-12-16 06:00:36.995103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.275 [2024-12-16 06:00:36.995112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.275 [2024-12-16 06:00:36.995133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.275 [2024-12-16 06:00:36.995149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.275 [2024-12-16 06:00:36.995166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.275 [2024-12-16 06:00:36.995182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.275 [2024-12-16 06:00:36.995198] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.275 [2024-12-16 06:00:36.995214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.275 [2024-12-16 06:00:36.995232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.275 [2024-12-16 06:00:36.995248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.275 [2024-12-16 06:00:36.995264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.275 [2024-12-16 06:00:36.995280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.275 [2024-12-16 06:00:36.995297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.275 [2024-12-16 06:00:36.995313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.275 [2024-12-16 06:00:36.995331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.275 [2024-12-16 06:00:36.995347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.275 [2024-12-16 06:00:36.995363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.275 [2024-12-16 06:00:36.995379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.275 [2024-12-16 06:00:36.995396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.275 [2024-12-16 06:00:36.995412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.275 [2024-12-16 06:00:36.995428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.275 [2024-12-16 06:00:36.995445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.275 [2024-12-16 06:00:36.995461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.275 [2024-12-16 06:00:36.995478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.275 [2024-12-16 06:00:36.995494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.275 [2024-12-16 06:00:36.995510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.275 [2024-12-16 06:00:36.995526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.275 [2024-12-16 06:00:36.995544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.275 [2024-12-16 06:00:36.995559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.275 [2024-12-16 06:00:36.995581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.275 [2024-12-16 06:00:36.995597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.275 [2024-12-16 06:00:36.995613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.275 [2024-12-16 06:00:36.995629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.275 [2024-12-16 06:00:36.995644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.275 [2024-12-16 06:00:36.995660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.275 [2024-12-16 06:00:36.995669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.275 [2024-12-16 06:00:36.995676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.995685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.995692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.995700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.276 [2024-12-16 06:00:36.995708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.995716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.995724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.995732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.995740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.995749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.995756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.995765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.995772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.995781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.995789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.995797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.995805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.995813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.995821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.995829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.995839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.995854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.995861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 
06:00:36.995870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.995877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.995886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.995893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.995902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.995909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.995919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.995927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.995935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.995942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.995951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.995960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.995969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.995976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.995984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.995992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:44 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.276 [2024-12-16 06:00:36.996315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.276 [2024-12-16 06:00:36.996321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44696 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:14.277 [2024-12-16 06:00:36.996337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.277 [2024-12-16 06:00:36.996352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.277 [2024-12-16 06:00:36.996366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.277 [2024-12-16 06:00:36.996380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.277 [2024-12-16 06:00:36.996393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.277 [2024-12-16 06:00:36.996407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.277 [2024-12-16 06:00:36.996422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.277 [2024-12-16 06:00:36.996435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.277 [2024-12-16 06:00:36.996451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.277 [2024-12-16 06:00:36.996465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.277 [2024-12-16 
06:00:36.996479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.277 [2024-12-16 06:00:36.996493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.277 [2024-12-16 06:00:36.996506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.277 [2024-12-16 06:00:36.996522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.277 [2024-12-16 06:00:36.996536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.277 [2024-12-16 06:00:36.996561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44816 len:8 PRP1 0x0 PRP2 0x0 00:32:14.277 [2024-12-16 06:00:36.996567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.277 [2024-12-16 06:00:36.996581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.277 [2024-12-16 06:00:36.996587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44824 len:8 PRP1 0x0 PRP2 0x0 00:32:14.277 [2024-12-16 06:00:36.996593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.277 [2024-12-16 06:00:36.996605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.277 [2024-12-16 06:00:36.996611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44832 len:8 PRP1 0x0 PRP2 0x0 00:32:14.277 [2024-12-16 06:00:36.996617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.277 [2024-12-16 06:00:36.996628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.277 [2024-12-16 06:00:36.996633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:44840 len:8 PRP1 0x0 PRP2 0x0 00:32:14.277 [2024-12-16 06:00:36.996640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.277 [2024-12-16 06:00:36.996651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.277 [2024-12-16 06:00:36.996656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44848 len:8 PRP1 0x0 PRP2 0x0 00:32:14.277 [2024-12-16 06:00:36.996664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.277 [2024-12-16 06:00:36.996675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.277 [2024-12-16 06:00:36.996681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44856 len:8 PRP1 0x0 PRP2 0x0 00:32:14.277 [2024-12-16 06:00:36.996687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.277 [2024-12-16 06:00:36.996698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.277 [2024-12-16 06:00:36.996703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44864 len:8 PRP1 0x0 PRP2 0x0 00:32:14.277 [2024-12-16 06:00:36.996711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.277 [2024-12-16 06:00:36.996722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.277 [2024-12-16 06:00:36.996728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44872 len:8 PRP1 0x0 PRP2 0x0 00:32:14.277 [2024-12-16 06:00:36.996734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.277 [2024-12-16 06:00:36.996746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.277 [2024-12-16 06:00:36.996751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44880 len:8 PRP1 0x0 PRP2 0x0 00:32:14.277 [2024-12-16 06:00:36.996757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.277 [2024-12-16 06:00:36.996768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.277 [2024-12-16 06:00:36.996774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44888 len:8 PRP1 0x0 PRP2 0x0 00:32:14.277 
[2024-12-16 06:00:36.996780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.277 [2024-12-16 06:00:36.996791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.277 [2024-12-16 06:00:36.996796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44896 len:8 PRP1 0x0 PRP2 0x0 00:32:14.277 [2024-12-16 06:00:36.996802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.277 [2024-12-16 06:00:36.996813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.277 [2024-12-16 06:00:36.996819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44904 len:8 PRP1 0x0 PRP2 0x0 00:32:14.277 [2024-12-16 06:00:36.996825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.277 [2024-12-16 06:00:36.996836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.277 [2024-12-16 06:00:36.996841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44912 len:8 PRP1 0x0 PRP2 0x0 00:32:14.277 [2024-12-16 06:00:36.996853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.277 [2024-12-16 06:00:36.996865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.277 [2024-12-16 06:00:36.996870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44920 len:8 PRP1 0x0 PRP2 0x0 00:32:14.277 [2024-12-16 06:00:36.996876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.277 [2024-12-16 06:00:36.996887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.277 [2024-12-16 06:00:36.996894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44928 len:8 PRP1 0x0 PRP2 0x0 00:32:14.277 [2024-12-16 06:00:36.996900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.277 [2024-12-16 06:00:36.996906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.277 [2024-12-16 06:00:36.996911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.278 [2024-12-16 06:00:36.996916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44936 len:8 PRP1 0x0 PRP2 0x0 00:32:14.278 [2024-12-16 06:00:36.996922] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:36.996929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.278 [2024-12-16 06:00:36.996933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.278 [2024-12-16 06:00:36.996939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44944 len:8 PRP1 0x0 PRP2 0x0 00:32:14.278 [2024-12-16 06:00:36.996945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:36.996951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.278 [2024-12-16 06:00:36.996956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.278 [2024-12-16 06:00:36.996961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44952 len:8 PRP1 0x0 PRP2 0x0 00:32:14.278 [2024-12-16 06:00:36.996968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:36.996974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.278 [2024-12-16 06:00:36.996979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.278 [2024-12-16 06:00:36.996984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44960 len:8 PRP1 0x0 PRP2 0x0 00:32:14.278 [2024-12-16 06:00:36.996990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:36.996996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.278 [2024-12-16 06:00:36.997001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.278 [2024-12-16 06:00:36.997006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44968 len:8 PRP1 0x0 PRP2 0x0 00:32:14.278 [2024-12-16 06:00:36.997012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:36.997019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.278 [2024-12-16 06:00:36.997024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.278 [2024-12-16 06:00:36.997029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44976 len:8 PRP1 0x0 PRP2 0x0 00:32:14.278 [2024-12-16 06:00:36.997036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:36.997043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.278 [2024-12-16 06:00:36.997047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.278 [2024-12-16 06:00:36.997053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44984 len:8 PRP1 0x0 PRP2 0x0 00:32:14.278 [2024-12-16 06:00:36.997059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:36.997066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.278 [2024-12-16 06:00:36.997072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.278 [2024-12-16 06:00:36.997077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44992 len:8 PRP1 0x0 PRP2 0x0 00:32:14.278 [2024-12-16 06:00:36.997083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:36.997090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.278 [2024-12-16 06:00:36.997094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.278 [2024-12-16 06:00:36.997100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45000 len:8 PRP1 0x0 PRP2 0x0 00:32:14.278 [2024-12-16 06:00:36.997106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:36.997112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.278 [2024-12-16 06:00:36.997117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.278 [2024-12-16 06:00:36.997122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45008 len:8 PRP1 0x0 PRP2 0x0 00:32:14.278 [2024-12-16 06:00:36.997128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:36.997137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.278 [2024-12-16 06:00:36.997142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.278 [2024-12-16 06:00:36.997147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45016 len:8 PRP1 0x0 PRP2 0x0 00:32:14.278 [2024-12-16 06:00:36.997153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:36.997160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.278 [2024-12-16 06:00:36.997165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.278 [2024-12-16 06:00:36.997170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45024 len:8 PRP1 0x0 PRP2 0x0 00:32:14.278 [2024-12-16 06:00:36.997176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:36.997182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.278 [2024-12-16 06:00:36.997187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.278 [2024-12-16 06:00:36.997192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45032 len:8 PRP1 0x0 PRP2 0x0 00:32:14.278 [2024-12-16 06:00:36.997199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:14.278 [2024-12-16 06:00:36.997205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.278 [2024-12-16 06:00:36.997210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.278 [2024-12-16 06:00:36.997215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45040 len:8 PRP1 0x0 PRP2 0x0 00:32:14.278 [2024-12-16 06:00:36.997222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:36.997228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.278 [2024-12-16 06:00:36.997233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.278 [2024-12-16 06:00:36.997238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45048 len:8 PRP1 0x0 PRP2 0x0 00:32:14.278 [2024-12-16 06:00:36.997244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:36.997252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.278 [2024-12-16 06:00:36.997257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.278 [2024-12-16 06:00:36.997262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45056 len:8 PRP1 0x0 PRP2 0x0 00:32:14.278 [2024-12-16 06:00:36.997268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:36.997275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.278 [2024-12-16 06:00:36.997280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.278 [2024-12-16 06:00:36.997286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45064 len:8 PRP1 0x0 PRP2 0x0 00:32:14.278 [2024-12-16 06:00:36.997292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:36.997299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.278 [2024-12-16 06:00:36.997304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.278 [2024-12-16 06:00:36.997309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45072 len:8 PRP1 0x0 PRP2 0x0 00:32:14.278 [2024-12-16 06:00:36.997316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:36.997356] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c21d20 was disconnected and freed. reset controller. 
00:32:14.278 [2024-12-16 06:00:36.997364] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:14.278 [2024-12-16 06:00:36.997385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.278 [2024-12-16 06:00:36.997392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:36.997400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.278 [2024-12-16 06:00:36.997406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:36.997413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.278 [2024-12-16 06:00:36.997419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:36.997426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.278 [2024-12-16 06:00:36.997433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:36.997439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:14.278 [2024-12-16 06:00:36.997460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfeac0 (9): Bad file descriptor 00:32:14.278 [2024-12-16 06:00:37.000204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:14.278 [2024-12-16 06:00:37.030678] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
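Editor's note: the per-interval samples that follow (e.g. 11126.60 IOPS, 43.46 MiB/s) are consistent with the 4 KiB transfer size visible in the printed commands above (len:8 512-byte sectors, SGL len:0x1000). A minimal sketch of that conversion, assuming a fixed 4 KiB I/O size; the helper name is illustrative and not part of the test suite:

```python
# Minimal sketch (not part of the test output): convert an IOPS figure to MiB/s,
# assuming the 4 KiB I/O size shown in the commands above (len:0x1000 bytes).
def iops_to_mib_per_s(iops: float, io_size_bytes: int = 4096) -> float:
    """Return throughput in MiB/s for a given IOPS rate and I/O size."""
    return iops * io_size_bytes / (1024 * 1024)

# Checks against the samples below: 11126.60 IOPS -> ~43.46 MiB/s,
# 11157.50 IOPS -> ~43.58 MiB/s.
assert round(iops_to_mib_per_s(11126.60), 2) == 43.46
assert round(iops_to_mib_per_s(11157.50), 2) == 43.58
```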
00:32:14.278 11126.60 IOPS, 43.46 MiB/s [2024-12-16T05:00:48.134Z] 11157.50 IOPS, 43.58 MiB/s [2024-12-16T05:00:48.134Z] 11189.57 IOPS, 43.71 MiB/s [2024-12-16T05:00:48.134Z] 11208.88 IOPS, 43.78 MiB/s [2024-12-16T05:00:48.134Z] [2024-12-16 06:00:41.418925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.278 [2024-12-16 06:00:41.418970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:41.418984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.278 [2024-12-16 06:00:41.418992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:41.419000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.278 [2024-12-16 06:00:41.419007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.278 [2024-12-16 06:00:41.419015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.279 [2024-12-16 06:00:41.419022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.279 [2024-12-16 06:00:41.419036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.279 [2024-12-16 06:00:41.419051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.279 [2024-12-16 06:00:41.419065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.279 [2024-12-16 06:00:41.419079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.279 [2024-12-16 06:00:41.419093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:14.279 [2024-12-16 06:00:41.419108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.279 [2024-12-16 06:00:41.419122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.279 [2024-12-16 06:00:41.419137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.279 [2024-12-16 06:00:41.419153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.279 [2024-12-16 06:00:41.419173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.279 [2024-12-16 06:00:41.419188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.279 [2024-12-16 06:00:41.419203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.279 [2024-12-16 06:00:41.419217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.279 [2024-12-16 06:00:41.419232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.279 [2024-12-16 06:00:41.419247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.279 [2024-12-16 06:00:41.419261] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.279 [2024-12-16 06:00:41.419276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.279 [2024-12-16 06:00:41.419290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.279 [2024-12-16 06:00:41.419304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.279 [2024-12-16 06:00:41.419318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.279 [2024-12-16 06:00:41.419333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.279 [2024-12-16 06:00:41.419347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.279 [2024-12-16 06:00:41.419363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.279 [2024-12-16 06:00:41.419378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.279 [2024-12-16 06:00:41.419392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.279 [2024-12-16 06:00:41.419406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:60152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.279 [2024-12-16 06:00:41.419420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.279 [2024-12-16 06:00:41.419434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:60168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.279 [2024-12-16 06:00:41.419448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:60176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.279 [2024-12-16 06:00:41.419463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.279 [2024-12-16 06:00:41.419477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:60192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.279 [2024-12-16 06:00:41.419492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.279 [2024-12-16 06:00:41.419506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:60208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.279 [2024-12-16 06:00:41.419520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.279 [2024-12-16 06:00:41.419528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.279 [2024-12-16 06:00:41.419536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:60232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:60248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:60296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:60312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:60328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:60336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:60344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 
[2024-12-16 06:00:41.419842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:60400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:60424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:60432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:60440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:60448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:60456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.419989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.419997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.420003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.420011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.420017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.420025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.420031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.420039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.420045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.420053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.420059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.420067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.420073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.420081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.420089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.420097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.420103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.420111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.420117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.280 [2024-12-16 06:00:41.420125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:60536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.280 [2024-12-16 06:00:41.420132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:73 nsid:1 lba:60544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60624 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.281 [2024-12-16 06:00:41.420428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:14.281 [2024-12-16 06:00:41.420442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.281 [2024-12-16 06:00:41.420458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.281 [2024-12-16 06:00:41.420473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.281 [2024-12-16 06:00:41.420487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.281 [2024-12-16 06:00:41.420501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.281 [2024-12-16 06:00:41.420516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.281 [2024-12-16 06:00:41.420531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.281 [2024-12-16 06:00:41.420545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.281 [2024-12-16 06:00:41.420559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420588] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:60760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.281 [2024-12-16 06:00:41.420720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.281 [2024-12-16 06:00:41.420727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.282 [2024-12-16 06:00:41.420734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.282 [2024-12-16 06:00:41.420742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.282 [2024-12-16 06:00:41.420749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.282 [2024-12-16 06:00:41.420757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.282 [2024-12-16 06:00:41.420763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.282 [2024-12-16 06:00:41.420771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.282 [2024-12-16 06:00:41.420777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.282 [2024-12-16 06:00:41.420785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.282 [2024-12-16 06:00:41.420792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.282 [2024-12-16 06:00:41.420799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.282 [2024-12-16 06:00:41.420808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.282 [2024-12-16 06:00:41.420816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.282 [2024-12-16 06:00:41.420823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.282 [2024-12-16 06:00:41.420844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:14.282 [2024-12-16 06:00:41.420855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:14.282 [2024-12-16 06:00:41.420861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61112 len:8 PRP1 0x0 PRP2 0x0 00:32:14.282 [2024-12-16 06:00:41.420867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.282 [2024-12-16 06:00:41.420909] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d57e50 was disconnected and freed. reset controller. 
00:32:14.282 [2024-12-16 06:00:41.420918] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:14.282 [2024-12-16 06:00:41.420937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.282 [2024-12-16 06:00:41.420945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.282 [2024-12-16 06:00:41.420953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.282 [2024-12-16 06:00:41.420959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.282 [2024-12-16 06:00:41.420966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.282 [2024-12-16 06:00:41.420972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.282 [2024-12-16 06:00:41.420979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.282 [2024-12-16 06:00:41.420986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.282 [2024-12-16 06:00:41.420992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:14.282 [2024-12-16 06:00:41.425038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:14.282 11212.67 IOPS, 43.80 MiB/s [2024-12-16T05:00:48.138Z] [2024-12-16 06:00:41.425073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfeac0 (9): Bad file descriptor 00:32:14.282 [2024-12-16 06:00:41.533658] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:32:14.282 11102.80 IOPS, 43.37 MiB/s [2024-12-16T05:00:48.138Z] 11101.18 IOPS, 43.36 MiB/s [2024-12-16T05:00:48.138Z] 11110.00 IOPS, 43.40 MiB/s [2024-12-16T05:00:48.138Z] 11114.62 IOPS, 43.42 MiB/s [2024-12-16T05:00:48.138Z] 11135.79 IOPS, 43.50 MiB/s 00:32:14.282 Latency(us) 00:32:14.282 [2024-12-16T05:00:48.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.282 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:14.282 Verification LBA range: start 0x0 length 0x4000 00:32:14.282 NVMe0n1 : 15.00 11140.38 43.52 670.01 0.00 10816.48 417.40 13044.78 00:32:14.282 [2024-12-16T05:00:48.138Z] =================================================================================================================== 00:32:14.282 [2024-12-16T05:00:48.138Z] Total : 11140.38 43.52 670.01 0.00 10816.48 417.40 13044.78 00:32:14.282 Received shutdown signal, test time was about 15.000000 seconds 00:32:14.282 00:32:14.282 Latency(us) 00:32:14.282 [2024-12-16T05:00:48.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.282 [2024-12-16T05:00:48.138Z] =================================================================================================================== 00:32:14.282 [2024-12-16T05:00:48.138Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:14.282 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:14.282 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:14.282 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:14.282 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3520605 00:32:14.282 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:14.282 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3520605 /var/tmp/bdevperf.sock 00:32:14.282 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3520605 ']' 00:32:14.282 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:14.282 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:14.282 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:14.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:14.282 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:14.282 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:14.282 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:14.282 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:14.282 06:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:14.282 [2024-12-16 06:00:48.106038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:14.541 06:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:14.542 [2024-12-16 06:00:48.306638] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:14.542 06:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:14.799 NVMe0n1 00:32:14.799 06:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:15.363 00:32:15.363 06:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:15.363 00:32:15.620 06:00:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:15.620 06:00:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:15.620 06:00:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:15.878 06:00:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:19.153 06:00:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:19.153 06:00:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:19.153 06:00:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3521306 00:32:19.153 06:00:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:19.153 06:00:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3521306 00:32:20.085 { 00:32:20.085 "results": [ 00:32:20.085 { 00:32:20.085 "job": "NVMe0n1", 00:32:20.085 "core_mask": "0x1", 00:32:20.085 "workload": "verify", 
00:32:20.085 "status": "finished", 00:32:20.085 "verify_range": { 00:32:20.085 "start": 0, 00:32:20.085 "length": 16384 00:32:20.085 }, 00:32:20.085 "queue_depth": 128, 00:32:20.085 "io_size": 4096, 00:32:20.085 "runtime": 1.00788, 00:32:20.085 "iops": 11449.775766956383, 00:32:20.085 "mibps": 44.72568658967337, 00:32:20.085 "io_failed": 0, 00:32:20.085 "io_timeout": 0, 00:32:20.085 "avg_latency_us": 11129.544244945118, 00:32:20.085 "min_latency_us": 916.7238095238096, 00:32:20.085 "max_latency_us": 9487.11619047619 00:32:20.085 } 00:32:20.085 ], 00:32:20.085 "core_count": 1 00:32:20.085 } 00:32:20.342 06:00:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:20.342 [2024-12-16 06:00:47.752229] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:20.342 [2024-12-16 06:00:47.752282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3520605 ] 00:32:20.342 [2024-12-16 06:00:47.807494] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.342 [2024-12-16 06:00:47.843070] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.342 [2024-12-16 06:00:49.589924] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:20.342 [2024-12-16 06:00:49.589969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:20.342 [2024-12-16 06:00:49.589980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.342 [2024-12-16 06:00:49.589989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:20.342 [2024-12-16 06:00:49.589997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.342 [2024-12-16 06:00:49.590004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:20.342 [2024-12-16 06:00:49.590011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.342 [2024-12-16 06:00:49.590019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:20.342 [2024-12-16 06:00:49.590025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.342 [2024-12-16 06:00:49.590032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.342 [2024-12-16 06:00:49.590059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.342 [2024-12-16 06:00:49.590073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a3ac0 (9): Bad file descriptor 00:32:20.342 [2024-12-16 06:00:49.692012] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:20.342 Running I/O for 1 seconds... 
00:32:20.342 11397.00 IOPS, 44.52 MiB/s 00:32:20.342 Latency(us) 00:32:20.342 [2024-12-16T05:00:54.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:20.342 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:20.342 Verification LBA range: start 0x0 length 0x4000 00:32:20.342 NVMe0n1 : 1.01 11449.78 44.73 0.00 0.00 11129.54 916.72 9487.12 00:32:20.342 [2024-12-16T05:00:54.198Z] =================================================================================================================== 00:32:20.342 [2024-12-16T05:00:54.198Z] Total : 11449.78 44.73 0.00 0.00 11129.54 916.72 9487.12 00:32:20.342 06:00:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:20.342 06:00:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:20.342 06:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:20.600 06:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:20.600 06:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:20.857 06:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:21.113 06:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:24.388 06:00:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:24.388 06:00:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:24.388 06:00:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3520605 00:32:24.388 06:00:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3520605 ']' 00:32:24.388 06:00:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3520605 00:32:24.388 06:00:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:24.388 06:00:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:24.388 06:00:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3520605 00:32:24.388 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:24.388 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:24.388 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3520605' 00:32:24.388 killing process with pid 3520605 00:32:24.388 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3520605 00:32:24.388 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3520605 00:32:24.388 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 
-- # sync 00:32:24.388 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:24.645 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:24.645 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:24.645 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:24.646 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:24.646 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:24.646 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:24.646 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:24.646 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:24.646 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:24.646 rmmod nvme_tcp 00:32:24.646 rmmod nvme_fabrics 00:32:24.646 rmmod nvme_keyring 00:32:24.646 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:24.646 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:24.646 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:24.646 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 3517668 ']' 00:32:24.646 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 3517668 00:32:24.646 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3517668 ']' 00:32:24.646 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3517668 00:32:24.646 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:24.646 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:24.646 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3517668 00:32:24.904 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:24.904 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:24.904 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3517668' 00:32:24.904 killing process with pid 3517668 00:32:24.904 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3517668 00:32:24.904 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3517668 00:32:24.904 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:24.904 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:24.904 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:24.904 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:24.904 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:32:24.904 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:24.904 06:00:58 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:32:24.904 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:24.904 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:24.904 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.904 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:24.904 06:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.437 06:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:27.437 00:32:27.437 real 0m37.117s 00:32:27.437 user 1m58.410s 00:32:27.437 sys 0m7.635s 00:32:27.437 06:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:27.437 06:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:27.437 ************************************ 00:32:27.437 END TEST nvmf_failover 00:32:27.437 ************************************ 00:32:27.437 06:01:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:27.437 06:01:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:27.437 06:01:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:27.437 06:01:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.437 ************************************ 00:32:27.437 START TEST nvmf_host_discovery 00:32:27.437 ************************************ 00:32:27.437 06:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:27.437 * Looking for test storage... 
00:32:27.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:27.437 06:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:27.437 06:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:32:27.437 06:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:27.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.437 --rc genhtml_branch_coverage=1 00:32:27.437 --rc genhtml_function_coverage=1 00:32:27.437 --rc genhtml_legend=1 00:32:27.437 --rc geninfo_all_blocks=1 00:32:27.437 --rc geninfo_unexecuted_blocks=1 00:32:27.437 00:32:27.437 ' 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:27.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.437 --rc genhtml_branch_coverage=1 00:32:27.437 --rc genhtml_function_coverage=1 00:32:27.437 --rc genhtml_legend=1 00:32:27.437 --rc geninfo_all_blocks=1 00:32:27.437 --rc geninfo_unexecuted_blocks=1 00:32:27.437 00:32:27.437 ' 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:27.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.437 --rc genhtml_branch_coverage=1 00:32:27.437 --rc genhtml_function_coverage=1 00:32:27.437 --rc genhtml_legend=1 00:32:27.437 --rc geninfo_all_blocks=1 00:32:27.437 --rc geninfo_unexecuted_blocks=1 00:32:27.437 00:32:27.437 ' 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:27.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.437 --rc genhtml_branch_coverage=1 00:32:27.437 --rc genhtml_function_coverage=1 00:32:27.437 --rc genhtml_legend=1 00:32:27.437 --rc geninfo_all_blocks=1 00:32:27.437 --rc geninfo_unexecuted_blocks=1 00:32:27.437 00:32:27.437 ' 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:27.437 06:01:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.437 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:27.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:27.438 06:01:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:32.704 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:32.704 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:32.704 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:32.704 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:32.704 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:32.704 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:32.704 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:32.704 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:32.704 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:32.704 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:32.704 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:32.704 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:32.704 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:32.704 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:32.704 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:32.704 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:32.704 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:32.704 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:32.704 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:32.705 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:32.705 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:32.705 Found net devices under 0000:af:00.0: cvl_0_0 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:32.705 Found net devices under 0000:af:00.1: cvl_0_1 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:32.705 06:01:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:32.705 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:32.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:32.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:32:32.964 00:32:32.964 --- 10.0.0.2 ping statistics --- 00:32:32.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.964 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:32.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:32.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:32:32.964 00:32:32.964 --- 10.0.0.1 ping statistics --- 00:32:32.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.964 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # return 0 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=3525654 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 3525654 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3525654 ']' 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:32.964 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:32.964 [2024-12-16 06:01:06.793855] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:32:32.964 [2024-12-16 06:01:06.793898] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:33.223 [2024-12-16 06:01:06.854184] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.223 [2024-12-16 06:01:06.892366] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:33.223 [2024-12-16 06:01:06.892403] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:33.223 [2024-12-16 06:01:06.892410] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:33.223 [2024-12-16 06:01:06.892416] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:33.223 [2024-12-16 06:01:06.892421] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:33.223 [2024-12-16 06:01:06.892459] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:33.223 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:33.223 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:32:33.223 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:33.223 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:33.223 06:01:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.223 [2024-12-16 06:01:07.020204] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.223 [2024-12-16 06:01:07.032380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.223 null0 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.223 null1 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3525722 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3525722 /tmp/host.sock 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3525722 ']' 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:33.223 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:33.223 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.481 [2024-12-16 06:01:07.106709] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:32:33.481 [2024-12-16 06:01:07.106748] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3525722 ] 00:32:33.481 [2024-12-16 06:01:07.160527] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.481 [2024-12-16 06:01:07.201221] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:33.481 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:33.481 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:32:33.481 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:33.481 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:33.481 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.481 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.481 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.481 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:33.481 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.481 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.481 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.481 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:33.481 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:33.481 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:33.481 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:33.481 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.481 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:33.481 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.481 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:33.481 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:33.739 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.740 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:33.740 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.740 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:33.740 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.998 [2024-12-16 06:01:07.609842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:32:33.998 06:01:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:34.563 [2024-12-16 06:01:08.358364] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:34.563 [2024-12-16 06:01:08.358385] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:34.563 [2024-12-16 06:01:08.358397] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:34.820 
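At this point in the trace the target-side configuration is complete. Condensed into plain RPC calls (same NQNs, ports and bdev sizes as in the log; invoking scripts/rpc.py directly instead of the test's rpc_cmd wrapper is an assumption made for readability), the sequence is:

# Target-side sketch: the nvmf_tgt launched inside cvl_0_0_ns_spdk is assumed
# to be up and serving RPCs on the default /var/tmp/spdk.sock.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
scripts/rpc.py bdev_null_create null0 1000 512
scripts/rpc.py bdev_null_create null1 1000 512
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test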
[2024-12-16 06:01:08.484775] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:34.820 [2024-12-16 06:01:08.581420] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:34.820 [2024-12-16 06:01:08.581438] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.078 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:35.336 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.336 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:32:35.336 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:35.336 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:35.336 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:35.336 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:35.336 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:35.336 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:35.336 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:35.336 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:35.337 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:35.337 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:35.337 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:35.337 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.337 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.337 06:01:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:35.337 06:01:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.337 [2024-12-16 06:01:09.125916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:35.337 [2024-12-16 06:01:09.126444] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:35.337 [2024-12-16 06:01:09.126465] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:35.337 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.595 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.595 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:35.595 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:35.595 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:35.595 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:35.595 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:35.595 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:35.595 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:35.595 06:01:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:35.595 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:35.595 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.595 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.595 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:35.595 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:35.595 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:35.595 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.595 [2024-12-16 06:01:09.253843] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:35.595 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:35.595 06:01:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:35.853 [2024-12-16 06:01:09.556116] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:35.853 [2024-12-16 06:01:09.556133] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:35.853 [2024-12-16 06:01:09.556138] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:36.786 06:01:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.786 [2024-12-16 06:01:10.381475] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:36.786 [2024-12-16 06:01:10.381501] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:36.786 [2024-12-16 06:01:10.383976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.786 [2024-12-16 06:01:10.383993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.786 [2024-12-16 06:01:10.384002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.786 [2024-12-16 06:01:10.384009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.786 [2024-12-16 06:01:10.384017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.786 [2024-12-16 06:01:10.384024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.786 [2024-12-16 06:01:10.384031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.786 [2024-12-16 06:01:10.384041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.786 [2024-12-16 06:01:10.384048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1995710 is same with the state(6) to be set 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:36.786 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:36.786 [2024-12-16 06:01:10.393988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1995710 (9): Bad file descriptor 00:32:36.786 [2024-12-16 06:01:10.404025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:36.786 [2024-12-16 06:01:10.404317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.786 [2024-12-16 06:01:10.404333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1995710 with addr=10.0.0.2, port=4420 00:32:36.786 [2024-12-16 06:01:10.404341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1995710 is same with the state(6) to be set 00:32:36.787 [2024-12-16 06:01:10.404353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1995710 (9): Bad file descriptor 00:32:36.787 [2024-12-16 06:01:10.404363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:36.787 [2024-12-16 06:01:10.404369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:36.787 [2024-12-16 06:01:10.404378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:32:36.787 [2024-12-16 06:01:10.404388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.787 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.787 [2024-12-16 06:01:10.414079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:36.787 [2024-12-16 06:01:10.414275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-12-16 06:01:10.414287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1995710 with addr=10.0.0.2, port=4420 00:32:36.787 [2024-12-16 06:01:10.414295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1995710 is same with the state(6) to be set 00:32:36.787 [2024-12-16 06:01:10.414305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1995710 (9): Bad file descriptor 00:32:36.787 [2024-12-16 06:01:10.414315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:36.787 [2024-12-16 06:01:10.414325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:36.787 [2024-12-16 06:01:10.414332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:36.787 [2024-12-16 06:01:10.414341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.787 [2024-12-16 06:01:10.424129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:36.787 [2024-12-16 06:01:10.424424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-12-16 06:01:10.424437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1995710 with addr=10.0.0.2, port=4420 00:32:36.787 [2024-12-16 06:01:10.424444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1995710 is same with the state(6) to be set 00:32:36.787 [2024-12-16 06:01:10.424454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1995710 (9): Bad file descriptor 00:32:36.787 [2024-12-16 06:01:10.424464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:36.787 [2024-12-16 06:01:10.424471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:36.787 [2024-12-16 06:01:10.424477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:36.787 [2024-12-16 06:01:10.424486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
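The checks interleaved with these reconnect attempts all go through the waitforcondition helper whose autotest_common.sh line numbers (@914-@920) keep appearing in the trace. A condensed reconstruction of that helper, based only on what the log shows (the behaviour after the tenth failed attempt is not visible here, so the final return 1 is an assumption):

# Minimal sketch of the polling helper used throughout this test.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        # The condition is a shell expression, e.g.
        #   '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        # so it is re-evaluated on every pass.
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}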
00:32:36.787 [2024-12-16 06:01:10.434181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:36.787 [2024-12-16 06:01:10.434448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-12-16 06:01:10.434460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1995710 with addr=10.0.0.2, port=4420 00:32:36.787 [2024-12-16 06:01:10.434468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1995710 is same with the state(6) to be set 00:32:36.787 [2024-12-16 06:01:10.434478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1995710 (9): Bad file descriptor 00:32:36.787 [2024-12-16 06:01:10.434487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:36.787 [2024-12-16 06:01:10.434493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:36.787 [2024-12-16 06:01:10.434499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:36.787 [2024-12-16 06:01:10.434508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.787 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.787 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:36.787 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:36.787 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:36.787 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:36.787 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:36.787 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:36.787 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:36.787 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:36.787 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:36.787 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:36.787 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.787 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.787 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:36.787 [2024-12-16 06:01:10.444232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:36.787 [2024-12-16 06:01:10.444354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-12-16 06:01:10.444365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1995710 with addr=10.0.0.2, port=4420 00:32:36.787 [2024-12-16 06:01:10.444373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1995710 is same with the state(6) to be set 00:32:36.787 [2024-12-16 06:01:10.444382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1995710 (9): Bad file descriptor 00:32:36.787 [2024-12-16 06:01:10.444392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:36.787 [2024-12-16 06:01:10.444398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:36.787 [2024-12-16 06:01:10.444404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:36.787 [2024-12-16 06:01:10.444413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.787 [2024-12-16 06:01:10.454282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:36.787 [2024-12-16 06:01:10.454414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-12-16 06:01:10.454428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1995710 with addr=10.0.0.2, port=4420 00:32:36.787 [2024-12-16 06:01:10.454435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1995710 is same with the state(6) to be set 00:32:36.787 [2024-12-16 06:01:10.454446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1995710 (9): Bad file descriptor 00:32:36.787 [2024-12-16 06:01:10.454456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:36.787 [2024-12-16 06:01:10.454462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:36.787 [2024-12-16 06:01:10.454468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:36.787 [2024-12-16 06:01:10.454478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.787 [2024-12-16 06:01:10.464345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:36.787 [2024-12-16 06:01:10.464647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-12-16 06:01:10.464660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1995710 with addr=10.0.0.2, port=4420 00:32:36.787 [2024-12-16 06:01:10.464667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1995710 is same with the state(6) to be set 00:32:36.787 [2024-12-16 06:01:10.464677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1995710 (9): Bad file descriptor 00:32:36.787 [2024-12-16 06:01:10.464686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:36.787 [2024-12-16 06:01:10.464692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:36.787 [2024-12-16 06:01:10.464698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:36.787 [2024-12-16 06:01:10.464708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:36.787 [2024-12-16 06:01:10.474396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:36.787 [2024-12-16 06:01:10.474589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-12-16 06:01:10.474602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1995710 with addr=10.0.0.2, port=4420 00:32:36.787 [2024-12-16 06:01:10.474610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1995710 is same with the state(6) to be set 00:32:36.787 [2024-12-16 06:01:10.474620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1995710 (9): Bad file descriptor 00:32:36.787 [2024-12-16 06:01:10.474630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:36.787 [2024-12-16 06:01:10.474636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:36.787 [2024-12-16 06:01:10.474643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:36.787 [2024-12-16 06:01:10.474652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.787 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.787 [2024-12-16 06:01:10.484452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:36.787 [2024-12-16 06:01:10.484632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.787 [2024-12-16 06:01:10.484644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1995710 with addr=10.0.0.2, port=4420 00:32:36.787 [2024-12-16 06:01:10.484651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1995710 is same with the state(6) to be set 00:32:36.787 [2024-12-16 06:01:10.484661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1995710 (9): Bad file descriptor 00:32:36.787 [2024-12-16 06:01:10.484670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:36.787 [2024-12-16 06:01:10.484676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:36.787 [2024-12-16 06:01:10.484682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:36.788 [2024-12-16 06:01:10.484691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:36.788 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:36.788 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:36.788 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:36.788 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:36.788 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:36.788 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:36.788 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:36.788 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:36.788 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:36.788 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:36.788 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:36.788 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.788 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:36.788 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.788 [2024-12-16 06:01:10.494501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:36.788 [2024-12-16 06:01:10.494614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-12-16 06:01:10.494626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1995710 with addr=10.0.0.2, port=4420 00:32:36.788 [2024-12-16 06:01:10.494634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1995710 is same with the state(6) to be set 00:32:36.788 [2024-12-16 06:01:10.494645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1995710 (9): Bad file descriptor 00:32:36.788 [2024-12-16 06:01:10.494654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:36.788 [2024-12-16 06:01:10.494660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:36.788 [2024-12-16 06:01:10.494666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:36.788 [2024-12-16 06:01:10.494674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
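The two getters exercised above are thin RPC wrappers defined in host/discovery.sh (the @55 and @63 markers in the prefixes): get_bdev_list lists bdev names over the host-side RPC socket, and get_subsystem_paths prints the trsvcid of every path of a given controller, which is how the test observes the switch from port 4420 to 4421. Roughly, assuming rpc_cmd resolves to scripts/rpc.py with the -s /tmp/host.sock socket used throughout this run:

get_bdev_list() {
    # bdev names, sorted and space-separated, e.g. "nvme0n1 nvme0n2"
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
    local name=$1    # controller name, e.g. nvme0
    # trsvcid of each connected path, numerically sorted, e.g. "4421"
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}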
00:32:36.788 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.788 [2024-12-16 06:01:10.504556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:36.788 [2024-12-16 06:01:10.504822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.788 [2024-12-16 06:01:10.504833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1995710 with addr=10.0.0.2, port=4420 00:32:36.788 [2024-12-16 06:01:10.504840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1995710 is same with the state(6) to be set 00:32:36.788 [2024-12-16 06:01:10.504854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1995710 (9): Bad file descriptor 00:32:36.788 [2024-12-16 06:01:10.504863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:36.788 [2024-12-16 06:01:10.504869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:36.788 [2024-12-16 06:01:10.504891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:36.788 [2024-12-16 06:01:10.504901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:36.788 [2024-12-16 06:01:10.507993] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:36.788 [2024-12-16 06:01:10.508007] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:36.788 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:32:36.788 06:01:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:37.720 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:37.720 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:37.720 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:37.721 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:37.721 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:37.721 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.721 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:37.721 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.721 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:37.721 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:37.979 06:01:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.979 06:01:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.351 [2024-12-16 06:01:12.850326] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:39.351 [2024-12-16 06:01:12.850342] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:39.351 [2024-12-16 06:01:12.850353] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:39.351 [2024-12-16 06:01:12.976737] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:39.351 [2024-12-16 06:01:13.085548] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:39.351 [2024-12-16 06:01:13.085575] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 
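The NOT wrapper driving the next few RPCs inverts the exit status: the test only passes when bdev_nvme_start_discovery fails, which it does here with JSON-RPC error -17 ("File exists") because a discovery service named nvme is already attached to 10.0.0.2:8009. A simplified sketch of that pattern; the real helper in autotest_common.sh also special-cases exit statuses above 128, as the 'es > 128' branch in the trace shows:

NOT() {
    if "$@"; then
        return 1    # the wrapped command unexpectedly succeeded
    fi
    return 0        # expected failure, e.g. the -17 "File exists" response below
}

# Usage as in the trace: re-issuing discovery for an address that is already registered.
NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w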
00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.351 request: 00:32:39.351 { 00:32:39.351 "name": "nvme", 00:32:39.351 "trtype": "tcp", 00:32:39.351 "traddr": "10.0.0.2", 00:32:39.351 "adrfam": "ipv4", 00:32:39.351 "trsvcid": "8009", 00:32:39.351 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:39.351 "wait_for_attach": true, 00:32:39.351 "method": "bdev_nvme_start_discovery", 00:32:39.351 "req_id": 1 00:32:39.351 } 00:32:39.351 Got JSON-RPC error response 00:32:39.351 response: 00:32:39.351 { 00:32:39.351 "code": -17, 00:32:39.351 "message": "File exists" 00:32:39.351 } 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:39.351 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.352 06:01:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:39.352 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:39.352 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:39.352 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:39.352 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:39.352 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:39.352 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:39.352 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:39.352 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:39.352 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.352 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.609 request: 00:32:39.609 { 00:32:39.609 "name": "nvme_second", 00:32:39.609 "trtype": "tcp", 00:32:39.609 "traddr": "10.0.0.2", 00:32:39.609 "adrfam": "ipv4", 00:32:39.610 "trsvcid": "8009", 00:32:39.610 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:39.610 "wait_for_attach": true, 00:32:39.610 "method": "bdev_nvme_start_discovery", 00:32:39.610 "req_id": 1 00:32:39.610 } 00:32:39.610 Got JSON-RPC error response 00:32:39.610 response: 00:32:39.610 { 00:32:39.610 "code": -17, 00:32:39.610 "message": "File exists" 00:32:39.610 } 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:39.610 06:01:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.610 06:01:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.542 [2024-12-16 06:01:14.325343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.542 [2024-12-16 06:01:14.325370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c6bd0 with addr=10.0.0.2, port=8010 00:32:40.542 [2024-12-16 06:01:14.325383] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:40.542 [2024-12-16 06:01:14.325390] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:40.542 [2024-12-16 06:01:14.325396] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:41.473 [2024-12-16 06:01:15.327869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.473 [2024-12-16 
06:01:15.327895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c6bd0 with addr=10.0.0.2, port=8010 00:32:41.473 [2024-12-16 06:01:15.327907] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:41.473 [2024-12-16 06:01:15.327913] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:41.473 [2024-12-16 06:01:15.327919] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:42.846 [2024-12-16 06:01:16.330016] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:42.846 request: 00:32:42.846 { 00:32:42.846 "name": "nvme_second", 00:32:42.846 "trtype": "tcp", 00:32:42.846 "traddr": "10.0.0.2", 00:32:42.846 "adrfam": "ipv4", 00:32:42.846 "trsvcid": "8010", 00:32:42.846 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:42.846 "wait_for_attach": false, 00:32:42.846 "attach_timeout_ms": 3000, 00:32:42.846 "method": "bdev_nvme_start_discovery", 00:32:42.846 "req_id": 1 00:32:42.846 } 00:32:42.846 Got JSON-RPC error response 00:32:42.846 response: 00:32:42.846 { 00:32:42.846 "code": -110, 00:32:42.846 "message": "Connection timed out" 00:32:42.846 } 00:32:42.846 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3525722 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:42.847 
06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:42.847 rmmod nvme_tcp 00:32:42.847 rmmod nvme_fabrics 00:32:42.847 rmmod nvme_keyring 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 3525654 ']' 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 3525654 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 3525654 ']' 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 3525654 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3525654 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3525654' 00:32:42.847 killing process with pid 3525654 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 3525654 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 3525654 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:42.847 06:01:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:45.436 00:32:45.436 
real 0m17.870s 00:32:45.436 user 0m22.339s 00:32:45.436 sys 0m5.701s 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.436 ************************************ 00:32:45.436 END TEST nvmf_host_discovery 00:32:45.436 ************************************ 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.436 ************************************ 00:32:45.436 START TEST nvmf_host_multipath_status 00:32:45.436 ************************************ 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:45.436 * Looking for test storage... 00:32:45.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 
-- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:45.436 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:45.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.436 --rc genhtml_branch_coverage=1 00:32:45.436 --rc genhtml_function_coverage=1 00:32:45.437 --rc genhtml_legend=1 00:32:45.437 --rc geninfo_all_blocks=1 00:32:45.437 --rc geninfo_unexecuted_blocks=1 00:32:45.437 00:32:45.437 ' 00:32:45.437 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:45.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.437 --rc genhtml_branch_coverage=1 00:32:45.437 --rc genhtml_function_coverage=1 00:32:45.437 --rc genhtml_legend=1 00:32:45.437 --rc geninfo_all_blocks=1 00:32:45.437 --rc geninfo_unexecuted_blocks=1 00:32:45.437 00:32:45.437 ' 00:32:45.437 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:45.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.437 --rc genhtml_branch_coverage=1 00:32:45.437 --rc genhtml_function_coverage=1 00:32:45.437 --rc genhtml_legend=1 00:32:45.437 --rc geninfo_all_blocks=1 00:32:45.437 --rc geninfo_unexecuted_blocks=1 00:32:45.437 00:32:45.437 ' 00:32:45.437 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:45.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.437 --rc genhtml_branch_coverage=1 00:32:45.437 --rc genhtml_function_coverage=1 00:32:45.437 --rc genhtml_legend=1 00:32:45.437 --rc geninfo_all_blocks=1 00:32:45.437 --rc geninfo_unexecuted_blocks=1 00:32:45.437 00:32:45.437 ' 00:32:45.437 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:45.437 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:45.437 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:45.437 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:45.437 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:45.437 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:45.437 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:45.437 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:45.437 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:45.437 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:45.437 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:45.437 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:45.437 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:45.437 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:45.437 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:45.437 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:45.437 06:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.437 
06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:45.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:45.437 06:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@320 -- # local -ga e810 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:50.703 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:50.703 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:50.703 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:50.704 Found net devices under 0000:af:00.0: cvl_0_0 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:50.704 Found net devices under 0000:af:00.1: cvl_0_1 
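The trace above shows the harness matching supported NICs by PCI vendor/device ID and then resolving each PCI function to its kernel net device through sysfs (the "Found net devices under ..." lines). A minimal sketch of that lookup, assuming a PCI address such as 0000:af:00.0 purely for illustration:

  #!/usr/bin/env bash
  # Sketch: map one PCI function to its net device(s), mirroring the sysfs glob in the trace.
  pci=0000:af:00.0                                   # illustrative address
  vendor=$(cat /sys/bus/pci/devices/$pci/vendor)     # e.g. 0x8086 (Intel)
  device=$(cat /sys/bus/pci/devices/$pci/device)     # e.g. 0x159b (E810)
  echo "Found $pci ($vendor - $device)"
  # Any net device bound to this function appears under .../net/
  for dev in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$dev" ] || continue                      # skip if the glob matched nothing
      echo "Found net devices under $pci: ${dev##*/}"
  done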
00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # is_hw=yes 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:50.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:50.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:32:50.704 00:32:50.704 --- 10.0.0.2 ping statistics --- 00:32:50.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.704 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:50.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:50.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:32:50.704 00:32:50.704 --- 10.0.0.1 ping statistics --- 00:32:50.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.704 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # return 0 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=3530879 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 3530879 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3530879 ']' 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:50.704 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:50.704 [2024-12-16 06:01:24.460107] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:50.704 [2024-12-16 06:01:24.460147] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:50.704 [2024-12-16 06:01:24.518869] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:50.962 [2024-12-16 06:01:24.558444] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:50.962 [2024-12-16 06:01:24.558479] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:50.962 [2024-12-16 06:01:24.558486] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:50.962 [2024-12-16 06:01:24.558492] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:50.962 [2024-12-16 06:01:24.558497] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:50.962 [2024-12-16 06:01:24.558541] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.962 [2024-12-16 06:01:24.558544] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.962 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:50.962 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:32:50.962 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:50.962 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:50.962 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:50.962 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:50.962 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3530879 00:32:50.962 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:51.220 [2024-12-16 06:01:24.844012] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:51.220 06:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:51.220 Malloc0 00:32:51.220 06:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:51.477 06:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:51.734 06:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:51.991 [2024-12-16 06:01:25.596508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:51.991 06:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:51.991 [2024-12-16 06:01:25.792969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:51.991 06:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3531120 00:32:51.991 06:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:51.991 06:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:51.991 06:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3531120 /var/tmp/bdevperf.sock 00:32:51.991 06:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3531120 ']' 00:32:51.991 06:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:51.991 06:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:51.991 06:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:51.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
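Condensed from the RPC calls visible in the trace above, this is the sequence that builds the two-listener target before bdevperf starts (paths and sizes are the ones this job uses; this is a sketch, not a replacement for the test script):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Two listeners on the same target IP give the initiator two independent paths
  # to the same namespace, which is what the ANA checks below exercise.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421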
00:32:51.991 06:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:51.991 06:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:52.250 06:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:52.250 06:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:32:52.250 06:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:52.507 06:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:32:53.073 Nvme0n1 00:32:53.073 06:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:53.331 Nvme0n1 00:32:53.331 06:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:53.331 06:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:55.230 06:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:55.230 06:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:55.488 06:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:55.746 06:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:56.680 06:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:56.680 06:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:56.680 06:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.680 06:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:56.938 06:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.938 06:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:56.938 06:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.938 06:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:57.196 06:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:57.196 06:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:57.196 06:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.196 06:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:57.196 06:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.196 06:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:57.196 06:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.196 06:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:57.453 06:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.453 06:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:57.453 06:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.453 06:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:57.710 06:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.710 06:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:57.710 06:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:57.710 06:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.968 06:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.968 06:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:57.968 06:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:58.225 06:01:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:58.225 06:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:59.598 06:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:59.598 06:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:59.598 06:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.598 06:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:59.598 06:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:59.598 06:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:59.598 06:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.598 06:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:59.598 06:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.598 06:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:59.598 06:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.598 06:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:59.856 06:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.856 06:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:59.857 06:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.857 06:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:00.114 06:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.114 06:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:00.114 06:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.114 06:01:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:00.372 06:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.372 06:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:00.372 06:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.372 06:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:00.372 06:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.372 06:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:00.373 06:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:00.630 06:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:00.889 06:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:01.821 06:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:01.821 06:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:01.821 06:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.821 06:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:02.078 06:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.078 06:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:02.078 06:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:02.078 06:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.336 06:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:02.336 06:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:02.336 06:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:33:02.336 06:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.594 06:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.594 06:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:02.594 06:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.594 06:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:02.852 06:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.852 06:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:02.852 06:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.852 06:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:02.852 06:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.852 06:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:02.852 06:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.852 06:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:03.110 06:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.110 06:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:03.110 06:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:03.368 06:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:03.626 06:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:04.560 06:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:04.560 06:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:04.560 06:01:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.560 06:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:04.818 06:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.818 06:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:04.818 06:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.818 06:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:05.076 06:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:05.076 06:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:05.076 06:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.076 06:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:05.076 06:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.076 06:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:05.076 06:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.076 06:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:05.334 06:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.334 06:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:05.334 06:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.334 06:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:05.592 06:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.592 06:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:05.592 06:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.592 06:01:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:05.850 06:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:05.850 06:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:05.850 06:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:05.850 06:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:06.108 06:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:07.041 06:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:07.041 06:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:07.298 06:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.298 06:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:07.298 06:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:07.298 06:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:07.298 06:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.298 06:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:07.556 06:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:07.556 06:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:07.556 06:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.556 06:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:07.814 06:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.814 06:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:07.814 06:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.814 06:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:08.072 06:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.072 06:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:08.072 06:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.072 06:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:08.072 06:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:08.072 06:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:08.072 06:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.072 06:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:08.330 06:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:08.330 06:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:08.330 06:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:08.587 06:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:08.845 06:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:09.782 06:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:09.782 06:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:09.782 06:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.782 06:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:10.041 06:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:10.041 06:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:10.041 06:01:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.041 06:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:10.041 06:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.041 06:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:10.041 06:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.041 06:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:10.299 06:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.299 06:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:10.299 06:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.299 06:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:10.557 06:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.557 06:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:10.557 06:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:10.557 06:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.815 06:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:10.815 06:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:10.815 06:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.815 06:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:11.073 06:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.073 06:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:11.073 06:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:33:11.073 06:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:11.331 06:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:11.588 06:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:12.521 06:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:12.521 06:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:12.521 06:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.521 06:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:12.779 06:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.779 06:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:12.779 06:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.779 06:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:13.037 06:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.037 06:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:13.037 06:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.037 06:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:13.295 06:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.295 06:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:13.295 06:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.296 06:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:13.554 06:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.554 06:01:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:13.554 06:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.554 06:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:13.554 06:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.554 06:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:13.554 06:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.554 06:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:13.812 06:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.812 06:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:13.812 06:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:14.069 06:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:14.327 06:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:15.259 06:01:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:15.259 06:01:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:15.259 06:01:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.259 06:01:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:15.517 06:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:15.517 06:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:15.517 06:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.517 06:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:15.776 06:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.776 06:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:15.776 06:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.776 06:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:15.776 06:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.776 06:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:15.776 06:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.776 06:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:16.034 06:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.034 06:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:16.034 06:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.034 06:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:16.292 06:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.292 06:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:16.292 06:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.292 06:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:16.549 06:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.550 06:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:16.550 06:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:16.808 06:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:16.808 06:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
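Each iteration above repeats the same pattern: set the ANA state of both listeners, sleep for a second so the initiator can refresh its view, then query bdevperf over its RPC socket and filter the answer with jq. A minimal sketch of one such round, using the same RPCs, socket path, and jq filter that appear in the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # Make the 4420 listener non_optimized and the 4421 listener optimized.
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4421 -n optimized
  sleep 1
  # Ask bdevperf which path is current/connected/accessible for a given listener port;
  # the test compares each answer against the expected true/false for that scenario.
  $rpc -s $sock bdev_nvme_get_io_paths | \
      jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'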
00:33:18.183 06:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:18.183 06:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:18.183 06:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.183 06:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:18.183 06:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.183 06:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:18.183 06:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.183 06:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:18.441 06:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.441 06:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:18.441 06:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.441 06:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:18.441 06:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.441 06:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:18.441 06:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.441 06:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:18.700 06:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.700 06:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:18.700 06:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.700 06:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:18.958 06:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.958 06:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:18.958 06:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:18.958 06:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.216 06:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.216 06:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:19.216 06:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:19.216 06:01:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:19.474 06:01:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:20.848 06:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:20.848 06:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:20.849 06:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.849 06:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:20.849 06:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.849 06:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:20.849 06:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.849 06:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:20.849 06:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:20.849 06:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:20.849 06:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.849 06:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:21.107 06:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:33:21.107 06:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:21.107 06:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.107 06:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:21.365 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.365 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:21.365 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.365 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:21.623 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.623 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:21.623 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.623 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:21.884 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:21.884 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3531120 00:33:21.884 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3531120 ']' 00:33:21.884 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3531120 00:33:21.884 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:33:21.884 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:21.884 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3531120 00:33:21.884 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:33:21.884 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:33:21.884 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3531120' 00:33:21.884 killing process with pid 3531120 00:33:21.884 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3531120 00:33:21.884 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3531120 00:33:21.884 { 00:33:21.884 "results": [ 00:33:21.884 { 00:33:21.884 "job": "Nvme0n1", 
00:33:21.884 "core_mask": "0x4", 00:33:21.884 "workload": "verify", 00:33:21.884 "status": "terminated", 00:33:21.884 "verify_range": { 00:33:21.884 "start": 0, 00:33:21.884 "length": 16384 00:33:21.884 }, 00:33:21.884 "queue_depth": 128, 00:33:21.884 "io_size": 4096, 00:33:21.884 "runtime": 28.31456, 00:33:21.884 "iops": 10523.808245651708, 00:33:21.884 "mibps": 41.10862595957698, 00:33:21.884 "io_failed": 0, 00:33:21.884 "io_timeout": 0, 00:33:21.884 "avg_latency_us": 12143.420468815348, 00:33:21.884 "min_latency_us": 397.89714285714285, 00:33:21.884 "max_latency_us": 3019898.88 00:33:21.884 } 00:33:21.884 ], 00:33:21.884 "core_count": 1 00:33:21.884 } 00:33:21.884 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3531120 00:33:21.884 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:21.884 [2024-12-16 06:01:25.857608] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:21.884 [2024-12-16 06:01:25.857661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3531120 ] 00:33:21.884 [2024-12-16 06:01:25.907836] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.884 [2024-12-16 06:01:25.947211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:21.884 [2024-12-16 06:01:26.989457] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:33:21.884 Running I/O for 90 seconds... 
00:33:21.884 11307.00 IOPS, 44.17 MiB/s [2024-12-16T05:01:55.740Z] 11373.50 IOPS, 44.43 MiB/s [2024-12-16T05:01:55.740Z] 11348.33 IOPS, 44.33 MiB/s [2024-12-16T05:01:55.740Z] 11370.50 IOPS, 44.42 MiB/s [2024-12-16T05:01:55.740Z] 11379.00 IOPS, 44.45 MiB/s [2024-12-16T05:01:55.740Z] 11388.67 IOPS, 44.49 MiB/s [2024-12-16T05:01:55.740Z] 11396.00 IOPS, 44.52 MiB/s [2024-12-16T05:01:55.740Z] 11397.12 IOPS, 44.52 MiB/s [2024-12-16T05:01:55.740Z] 11388.67 IOPS, 44.49 MiB/s [2024-12-16T05:01:55.740Z] 11384.90 IOPS, 44.47 MiB/s [2024-12-16T05:01:55.740Z] 11377.55 IOPS, 44.44 MiB/s [2024-12-16T05:01:55.740Z] 11363.08 IOPS, 44.39 MiB/s [2024-12-16T05:01:55.740Z] [2024-12-16 06:01:39.659413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.884 [2024-12-16 06:01:39.659451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:21.884 [2024-12-16 06:01:39.659487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.884 [2024-12-16 06:01:39.659495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:21.884 [2024-12-16 06:01:39.659508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.884 [2024-12-16 06:01:39.659516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:21.884 [2024-12-16 06:01:39.659529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.884 [2024-12-16 06:01:39.659536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.659549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.659556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.659569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.659575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.659588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.659595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.659607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.659614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 
06:01:39.659627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.659633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.659651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.659658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.659670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.659677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.659689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.659696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.659708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.659716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.659728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.659735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.659747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.659754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.659766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.659773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.659785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.659791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.659804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.659811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 
sqhd:0018 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.659823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.659829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.659842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.659854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660577] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.885 [2024-12-16 06:01:39.660676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:21.885 [2024-12-16 06:01:39.660689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.660696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.660710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.660717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.660729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.660738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.660751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.660757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.660770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:21.886 [2024-12-16 06:01:39.660777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.660789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.660796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.660810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.660817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.660830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.660836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.660856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.660863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.660877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.660883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.660897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.660903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.660916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.660923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.660936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.660943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.660956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.660962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.660975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:27 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.660982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.660996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 
00:33:21.886 [2024-12-16 06:01:39.661377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.886 [2024-12-16 06:01:39.661483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:21.886 [2024-12-16 06:01:39.661497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.661503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.661625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.661633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.661650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.661657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.661674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.661681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.661697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.661704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.661720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.887 [2024-12-16 06:01:39.661727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.661743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.887 [2024-12-16 06:01:39.661750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.661766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.887 [2024-12-16 06:01:39.661772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.661790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.887 [2024-12-16 06:01:39.661797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.661813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.661820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.661836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.661843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.661866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.661874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.661891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.661897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.661913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.661920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.661937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.661943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.661960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.661967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.661983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.661989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:21.887 [2024-12-16 06:01:39.662149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 
lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.887 [2024-12-16 06:01:39.662512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.887 [2024-12-16 06:01:39.662519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:39.662536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.888 [2024-12-16 06:01:39.662543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:39.662560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.888 [2024-12-16 06:01:39.662566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:39.662583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.888 [2024-12-16 06:01:39.662590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:39.662606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.888 [2024-12-16 06:01:39.662613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:39.662629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.888 [2024-12-16 06:01:39.662636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:21.888 10885.15 IOPS, 42.52 MiB/s [2024-12-16T05:01:55.744Z] 10107.64 IOPS, 39.48 MiB/s [2024-12-16T05:01:55.744Z] 9433.80 IOPS, 36.85 MiB/s [2024-12-16T05:01:55.744Z] 9232.31 IOPS, 36.06 MiB/s [2024-12-16T05:01:55.744Z] 9355.94 IOPS, 36.55 MiB/s [2024-12-16T05:01:55.744Z] 9478.72 IOPS, 37.03 MiB/s [2024-12-16T05:01:55.744Z] 9690.47 IOPS, 37.85 MiB/s [2024-12-16T05:01:55.744Z] 9876.65 IOPS, 38.58 MiB/s [2024-12-16T05:01:55.744Z] 10009.90 IOPS, 39.10 MiB/s [2024-12-16T05:01:55.744Z] 10069.55 IOPS, 39.33 MiB/s [2024-12-16T05:01:55.744Z] 10119.52 IOPS, 39.53 MiB/s [2024-12-16T05:01:55.744Z] 10222.00 IOPS, 39.93 MiB/s [2024-12-16T05:01:55.744Z] 10348.80 IOPS, 40.42 MiB/s [2024-12-16T05:01:55.744Z] 10468.92 IOPS, 40.89 MiB/s [2024-12-16T05:01:55.744Z] [2024-12-16 06:01:53.255259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.888 [2024-12-16 06:01:53.255295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.255326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.888 [2024-12-16 06:01:53.255335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.255357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.888 [2024-12-16 06:01:53.255364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.255377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.888 [2024-12-16 06:01:53.255384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.255396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.888 [2024-12-16 06:01:53.255403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.255415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.888 [2024-12-16 06:01:53.255422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 
cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.255435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.888 [2024-12-16 06:01:53.255441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.255454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.888 [2024-12-16 06:01:53.255461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.255473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.888 [2024-12-16 06:01:53.255480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.255492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.888 [2024-12-16 06:01:53.255499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.255511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.888 [2024-12-16 06:01:53.255518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.255530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.888 [2024-12-16 06:01:53.255537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.255549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.888 [2024-12-16 06:01:53.255555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.255569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.888 [2024-12-16 06:01:53.255575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.255589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.888 [2024-12-16 06:01:53.255596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.255609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.888 [2024-12-16 06:01:53.255616] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.257975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.888 [2024-12-16 06:01:53.257998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.258014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.888 [2024-12-16 06:01:53.258021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.258033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.888 [2024-12-16 06:01:53.258040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.258052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.888 [2024-12-16 06:01:53.258058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.258070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.888 [2024-12-16 06:01:53.258077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.258089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.888 [2024-12-16 06:01:53.258095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.258107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.888 [2024-12-16 06:01:53.258113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.258125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.888 [2024-12-16 06:01:53.258132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.258747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.888 [2024-12-16 06:01:53.258760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.258774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.888 [2024-12-16 
06:01:53.258780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:21.888 [2024-12-16 06:01:53.258792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.888 [2024-12-16 06:01:53.258802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:21.889 [2024-12-16 06:01:53.258814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.889 [2024-12-16 06:01:53.258820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:21.889 [2024-12-16 06:01:53.258832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.889 [2024-12-16 06:01:53.258838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:21.889 [2024-12-16 06:01:53.258855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.889 [2024-12-16 06:01:53.258862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:21.889 [2024-12-16 06:01:53.258889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.889 [2024-12-16 06:01:53.258896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:21.889 [2024-12-16 06:01:53.258908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.889 [2024-12-16 06:01:53.258915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:21.889 [2024-12-16 06:01:53.258927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.889 [2024-12-16 06:01:53.258933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:21.889 [2024-12-16 06:01:53.258945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.889 [2024-12-16 06:01:53.258952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:21.889 [2024-12-16 06:01:53.258964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.889 [2024-12-16 06:01:53.258971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:21.889 [2024-12-16 06:01:53.258983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82888 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:21.889 [2024-12-16 06:01:53.258989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:21.889 [2024-12-16 06:01:53.259001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.889 [2024-12-16 06:01:53.259008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:21.889 [2024-12-16 06:01:53.259020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.889 [2024-12-16 06:01:53.259026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:21.889 [2024-12-16 06:01:53.259039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.889 [2024-12-16 06:01:53.259047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:21.889 [2024-12-16 06:01:53.259060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.889 [2024-12-16 06:01:53.259066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:21.889 10496.33 IOPS, 41.00 MiB/s [2024-12-16T05:01:55.745Z] 10516.82 IOPS, 41.08 MiB/s [2024-12-16T05:01:55.745Z] Received shutdown signal, test time was about 28.315224 seconds 00:33:21.889 00:33:21.889 Latency(us) 00:33:21.889 [2024-12-16T05:01:55.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:21.889 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:21.889 Verification LBA range: start 0x0 length 0x4000 00:33:21.889 Nvme0n1 : 28.31 10523.81 41.11 0.00 0.00 12143.42 397.90 3019898.88 00:33:21.889 [2024-12-16T05:01:55.745Z] =================================================================================================================== 00:33:21.889 [2024-12-16T05:01:55.745Z] Total : 10523.81 41.11 0.00 0.00 12143.42 397.90 3019898.88 00:33:21.889 [2024-12-16 06:01:55.552376] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:33:21.889 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:22.147 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:22.147 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:22.147 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:22.147 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:22.147 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 
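The multipath_status run ends at this point and its teardown begins: the subsystem under test is deleted over RPC, the scratch file is removed, and nvmftestfini (whose expansion continues in the trace below) unloads the NVMe host modules and stops the target process. Condensed into plain commands, with paths and the pid taken from this run; this is a sketch of what the traced helpers do here, not a literal script excerpt.

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
    rm -f test/nvmf/host/try.txt                                      # remove the scratch file
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics            # nvmftestfini: unload host-side modules
    kill 3530879                                                      # killprocess: stop the nvmf_tgt started for this test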
00:33:22.147 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:22.147 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:33:22.147 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:22.147 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:22.147 rmmod nvme_tcp 00:33:22.147 rmmod nvme_fabrics 00:33:22.147 rmmod nvme_keyring 00:33:22.147 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:22.147 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:22.147 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:22.147 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 3530879 ']' 00:33:22.147 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 3530879 00:33:22.147 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3530879 ']' 00:33:22.147 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3530879 00:33:22.147 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:33:22.147 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:22.147 06:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3530879 00:33:22.406 06:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:22.406 06:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:22.406 06:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3530879' 00:33:22.406 killing process with pid 3530879 00:33:22.406 06:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3530879 00:33:22.406 06:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3530879 00:33:22.406 06:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:22.406 06:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:22.406 06:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:22.406 06:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:22.406 06:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:33:22.406 06:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:22.406 06:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:33:22.406 06:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:22.406 06:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:22.406 06:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.406 06:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:22.406 06:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:24.940 00:33:24.940 real 0m39.495s 00:33:24.940 user 1m47.868s 00:33:24.940 sys 0m10.916s 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:24.940 ************************************ 00:33:24.940 END TEST nvmf_host_multipath_status 00:33:24.940 ************************************ 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.940 ************************************ 00:33:24.940 START TEST nvmf_discovery_remove_ifc 00:33:24.940 ************************************ 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:24.940 * Looking for test storage... 
00:33:24.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:24.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.940 --rc genhtml_branch_coverage=1 00:33:24.940 --rc genhtml_function_coverage=1 00:33:24.940 --rc genhtml_legend=1 00:33:24.940 --rc geninfo_all_blocks=1 00:33:24.940 --rc geninfo_unexecuted_blocks=1 00:33:24.940 00:33:24.940 ' 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:24.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.940 --rc genhtml_branch_coverage=1 00:33:24.940 --rc genhtml_function_coverage=1 00:33:24.940 --rc genhtml_legend=1 00:33:24.940 --rc geninfo_all_blocks=1 00:33:24.940 --rc geninfo_unexecuted_blocks=1 00:33:24.940 00:33:24.940 ' 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:24.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.940 --rc genhtml_branch_coverage=1 00:33:24.940 --rc genhtml_function_coverage=1 00:33:24.940 --rc genhtml_legend=1 00:33:24.940 --rc geninfo_all_blocks=1 00:33:24.940 --rc geninfo_unexecuted_blocks=1 00:33:24.940 00:33:24.940 ' 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:24.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.940 --rc genhtml_branch_coverage=1 00:33:24.940 --rc genhtml_function_coverage=1 00:33:24.940 --rc genhtml_legend=1 00:33:24.940 --rc geninfo_all_blocks=1 00:33:24.940 --rc geninfo_unexecuted_blocks=1 00:33:24.940 00:33:24.940 ' 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:24.940 
06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:24.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:24.940 06:01:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:30.212 06:02:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:30.212 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:30.212 06:02:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:30.212 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:30.212 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:30.213 Found net devices under 0000:af:00.0: cvl_0_0 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:30.213 Found net devices under 0000:af:00.1: cvl_0_1 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 
-- # (( 2 == 0 )) 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # is_hw=yes 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:30.213 06:02:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:30.472 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:30.472 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:30.472 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:30.472 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:30.472 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:30.472 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:30.472 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:30.472 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:30.472 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:30.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:33:30.472 00:33:30.472 --- 10.0.0.2 ping statistics --- 00:33:30.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.472 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:33:30.472 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:30.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:30.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:33:30.472 00:33:30.472 --- 10.0.0.1 ping statistics --- 00:33:30.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.472 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:33:30.472 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:30.472 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # return 0 00:33:30.472 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:30.472 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:30.472 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:30.472 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:30.472 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:30.472 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:30.472 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:30.731 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:30.731 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:30.731 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:30.731 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:30.731 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=3539466 00:33:30.731 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 3539466 00:33:30.731 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:30.731 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3539466 ']' 00:33:30.731 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.731 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:30.731 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
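Before the discovery test can start, the trace above prepares the TCP test network: the two ice ports are exposed as cvl_0_0 and cvl_0_1, the target-side port is moved into a private network namespace, both ends get 10.0.0.x addresses, an iptables rule admits NVMe/TCP traffic on port 4420, reachability is verified with ping in both directions, and the target is launched inside the namespace. A condensed sketch of that setup using the names from this run (roughly what the common.sh helpers expand to here, not a literal excerpt):

    ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP to the initiator port
    ping -c 1 10.0.0.2                                                  # host -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2   # start the target in the namespace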
00:33:30.731 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:30.731 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:30.731 [2024-12-16 06:02:04.403720] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:30.731 [2024-12-16 06:02:04.403763] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:30.731 [2024-12-16 06:02:04.462714] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.731 [2024-12-16 06:02:04.500567] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:30.731 [2024-12-16 06:02:04.500607] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:30.731 [2024-12-16 06:02:04.500616] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:30.731 [2024-12-16 06:02:04.500624] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:30.731 [2024-12-16 06:02:04.500630] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:30.731 [2024-12-16 06:02:04.500652] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:30.731 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:30.731 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:30.731 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:30.731 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:30.731 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:30.989 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:30.989 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:30.989 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.989 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:30.989 [2024-12-16 06:02:04.632476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:30.989 [2024-12-16 06:02:04.640650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:30.989 null0 00:33:30.990 [2024-12-16 06:02:04.672635] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:30.990 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.990 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3539486 00:33:30.990 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3539486 /tmp/host.sock 00:33:30.990 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:30.990 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3539486 ']' 00:33:30.990 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:33:30.990 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:30.990 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:30.990 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:30.990 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:30.990 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:30.990 [2024-12-16 06:02:04.742346] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:30.990 [2024-12-16 06:02:04.742386] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3539486 ] 00:33:30.990 [2024-12-16 06:02:04.797341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.990 [2024-12-16 06:02:04.836335] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.248 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:31.248 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:31.248 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:31.248 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:31.248 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.248 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:31.248 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.248 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:31.248 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.248 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:31.248 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.248 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:31.248 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:31.248 06:02:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.623 [2024-12-16 06:02:06.040358] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:32.623 [2024-12-16 06:02:06.040381] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:32.623 [2024-12-16 06:02:06.040396] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:32.623 [2024-12-16 06:02:06.166788] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:32.623 [2024-12-16 06:02:06.270549] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:32.623 [2024-12-16 06:02:06.270595] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:32.623 [2024-12-16 06:02:06.270613] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:32.623 [2024-12-16 06:02:06.270628] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:32.623 [2024-12-16 06:02:06.270644] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:32.623 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.623 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:32.623 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:32.623 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:32.623 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.623 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.623 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:32.624 [2024-12-16 06:02:06.278029] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x805100 was disconnected and freed. delete nvme_qpair. 
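The discovery attach and log-page sequence above was kicked off by the bdev_nvme_start_discovery call a few entries earlier. Assuming rpc_cmd is the usual thin wrapper around scripts/rpc.py in the same SPDK checkout, the equivalent direct invocation against the host application's socket is the following; every flag value is copied from the trace, only the wrapper equivalence is assumed.

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
      bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

Because --wait-for-attach is passed, the RPC returns only once the discovered subsystem has been attached, which is why nvme0n1 is already present in the first bdev_get_bdevs call below.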
00:33:32.624 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:32.624 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:32.624 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.624 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:32.624 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:32.624 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:32.624 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:32.624 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:32.624 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:32.624 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:32.624 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:32.624 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.624 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:32.624 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.624 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.624 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:32.624 06:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:34.054 06:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:34.054 06:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:34.054 06:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:34.054 06:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.054 06:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:34.054 06:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.054 06:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:34.054 06:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.054 06:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:34.054 06:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:34.646 06:02:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:34.646 06:02:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:34.646 06:02:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:34.646 06:02:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.646 06:02:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:34.646 06:02:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.646 06:02:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:34.918 06:02:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.918 06:02:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:34.918 06:02:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:35.868 06:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:35.868 06:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:35.868 06:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:35.868 06:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.868 06:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:35.868 06:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:35.868 06:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:35.868 06:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.868 06:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:35.868 06:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:36.800 06:02:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:36.800 06:02:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:36.800 06:02:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:36.800 06:02:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.800 06:02:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:36.800 06:02:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:36.800 06:02:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:36.800 06:02:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.800 06:02:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:36.800 06:02:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:38.172 06:02:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:33:38.172 06:02:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:38.172 06:02:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:38.172 06:02:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.172 06:02:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:38.172 06:02:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:38.172 06:02:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:38.172 06:02:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.172 06:02:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:38.172 06:02:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:38.172 [2024-12-16 06:02:11.712171] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:38.172 [2024-12-16 06:02:11.712213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.172 [2024-12-16 06:02:11.712224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.172 [2024-12-16 06:02:11.712234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.172 [2024-12-16 06:02:11.712240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.172 [2024-12-16 06:02:11.712248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.172 [2024-12-16 06:02:11.712254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.172 [2024-12-16 06:02:11.712261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.172 [2024-12-16 06:02:11.712268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.172 [2024-12-16 06:02:11.712276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.172 [2024-12-16 06:02:11.712282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.172 [2024-12-16 06:02:11.712288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1980 is same with the state(6) to be set 00:33:38.172 [2024-12-16 06:02:11.722193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e1980 (9): Bad file descriptor 00:33:38.172 [2024-12-16 06:02:11.732232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:39.105 06:02:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:39.105 06:02:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:39.105 06:02:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:39.105 06:02:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:39.105 06:02:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.105 06:02:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.105 06:02:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:39.105 [2024-12-16 06:02:12.735918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:39.105 [2024-12-16 06:02:12.735962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7e1980 with addr=10.0.0.2, port=4420 00:33:39.105 [2024-12-16 06:02:12.735978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1980 is same with the state(6) to be set 00:33:39.105 [2024-12-16 06:02:12.736005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e1980 (9): Bad file descriptor 00:33:39.105 [2024-12-16 06:02:12.736453] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:39.105 [2024-12-16 06:02:12.736490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:39.105 [2024-12-16 06:02:12.736501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:39.105 [2024-12-16 06:02:12.736513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:39.105 [2024-12-16 06:02:12.736532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.105 [2024-12-16 06:02:12.736543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:39.105 06:02:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.105 06:02:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:39.105 06:02:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:40.037 [2024-12-16 06:02:13.739015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:40.037 [2024-12-16 06:02:13.739037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:40.037 [2024-12-16 06:02:13.739045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:40.038 [2024-12-16 06:02:13.739053] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:33:40.038 [2024-12-16 06:02:13.739065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
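The repeating get_bdev_list / sleep 1 entries surrounding the reconnect errors are the test polling for nvme0n1 to disappear after the listener address was deleted and the link taken down. Reconstructed from the pipeline visible in the trace (bdev_get_bdevs piped through jq -r '.[].name', sort and xargs), the helpers plausibly look like the sketch below; this is inferred from the log, not the verbatim discovery_remove_ifc.sh source, and the real script presumably also bounds the number of retries.

  get_bdev_list() {
      # Flatten the host app's bdev names into a single sorted line.
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      local expected=$1
      # Poll once per second until the list matches; '' means "no bdevs left".
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }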
00:33:40.038 [2024-12-16 06:02:13.739082] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:40.038 [2024-12-16 06:02:13.739104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.038 [2024-12-16 06:02:13.739113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.038 [2024-12-16 06:02:13.739123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.038 [2024-12-16 06:02:13.739130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.038 [2024-12-16 06:02:13.739137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.038 [2024-12-16 06:02:13.739144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.038 [2024-12-16 06:02:13.739151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.038 [2024-12-16 06:02:13.739157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.038 [2024-12-16 06:02:13.739165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.038 [2024-12-16 06:02:13.739172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.038 [2024-12-16 06:02:13.739178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:33:40.038 [2024-12-16 06:02:13.739227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d1090 (9): Bad file descriptor 00:33:40.038 [2024-12-16 06:02:13.740246] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:40.038 [2024-12-16 06:02:13.740256] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:33:40.038 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:40.038 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:40.038 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:40.038 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.038 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:40.038 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:40.038 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:40.038 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.038 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:40.038 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:40.038 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:40.295 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:40.295 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:40.295 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:40.295 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:40.295 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.295 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:40.295 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:40.295 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:40.295 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.295 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:40.295 06:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:41.227 06:02:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:41.227 06:02:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:41.227 06:02:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:41.227 06:02:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.227 06:02:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:41.228 06:02:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:41.228 06:02:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:41.228 06:02:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.228 06:02:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:41.228 06:02:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:42.158 [2024-12-16 06:02:15.794007] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:42.158 [2024-12-16 06:02:15.794024] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:42.158 [2024-12-16 06:02:15.794038] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:42.158 [2024-12-16 06:02:15.880291] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:42.158 06:02:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:42.158 06:02:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:42.158 06:02:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:42.158 06:02:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.158 06:02:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:42.158 06:02:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:42.158 06:02:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:42.158 06:02:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.415 06:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:42.415 06:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:42.415 [2024-12-16 06:02:16.099941] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:42.415 [2024-12-16 06:02:16.099976] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:42.415 [2024-12-16 06:02:16.099993] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:42.415 [2024-12-16 06:02:16.100005] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:42.415 [2024-12-16 06:02:16.100011] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:42.415 [2024-12-16 06:02:16.102978] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x7dd200 was disconnected and freed. delete nvme_qpair. 
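At this point one full remove/re-add cycle is complete: the listener address was removed and the link downed, the host dropped nvme0n1 after its reconnect attempts timed out, and the address was then restored so discovery could re-attach the subsystem as nvme1. Collected in one place, the namespace-side commands driving the cycle, exactly as they appear in the trace, are:

  # Take the target-facing interface away from the namespace...
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  # ...then restore it so the discovery service can reconnect and re-create the bdev.
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up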
00:33:43.347 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:43.347 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:43.347 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:43.347 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:43.347 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.347 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:43.347 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.347 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.347 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:43.347 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:43.347 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3539486 00:33:43.347 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3539486 ']' 00:33:43.347 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3539486 00:33:43.347 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:43.347 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:43.347 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3539486 00:33:43.347 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:43.347 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:43.347 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3539486' 00:33:43.347 killing process with pid 3539486 00:33:43.347 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3539486 00:33:43.347 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3539486 00:33:43.605 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:43.605 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:43.605 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:43.605 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:43.605 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:43.605 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:43.605 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:43.605 rmmod nvme_tcp 00:33:43.605 rmmod nvme_fabrics 00:33:43.605 rmmod nvme_keyring 00:33:43.606 06:02:17 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:43.606 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:43.606 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:43.606 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 3539466 ']' 00:33:43.606 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 3539466 00:33:43.606 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3539466 ']' 00:33:43.606 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3539466 00:33:43.606 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:43.606 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:43.606 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3539466 00:33:43.606 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:43.606 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:43.606 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3539466' 00:33:43.606 killing process with pid 3539466 00:33:43.606 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3539466 00:33:43.606 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3539466 00:33:43.863 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:43.863 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:43.863 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:43.863 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:43.863 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:33:43.863 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:33:43.863 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:43.863 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:43.863 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:43.863 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.863 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:43.863 06:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.393 06:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:46.393 00:33:46.393 real 0m21.272s 00:33:46.393 user 0m26.567s 00:33:46.393 sys 0m5.642s 00:33:46.393 06:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:33:46.393 06:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:46.393 ************************************ 00:33:46.393 END TEST nvmf_discovery_remove_ifc 00:33:46.393 ************************************ 00:33:46.393 06:02:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:46.393 06:02:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:46.393 06:02:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:46.393 06:02:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.393 ************************************ 00:33:46.393 START TEST nvmf_identify_kernel_target 00:33:46.393 ************************************ 00:33:46.393 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:46.393 * Looking for test storage... 00:33:46.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:46.393 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:46.393 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:33:46.393 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:46.393 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:46.393 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:46.393 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:46.393 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:46.393 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:46.393 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:46.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.394 --rc genhtml_branch_coverage=1 00:33:46.394 --rc genhtml_function_coverage=1 00:33:46.394 --rc genhtml_legend=1 00:33:46.394 --rc geninfo_all_blocks=1 00:33:46.394 --rc geninfo_unexecuted_blocks=1 00:33:46.394 00:33:46.394 ' 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:46.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.394 --rc genhtml_branch_coverage=1 00:33:46.394 --rc genhtml_function_coverage=1 00:33:46.394 --rc genhtml_legend=1 00:33:46.394 --rc geninfo_all_blocks=1 00:33:46.394 --rc geninfo_unexecuted_blocks=1 00:33:46.394 00:33:46.394 ' 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:46.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.394 --rc genhtml_branch_coverage=1 00:33:46.394 --rc genhtml_function_coverage=1 00:33:46.394 --rc genhtml_legend=1 00:33:46.394 --rc geninfo_all_blocks=1 00:33:46.394 --rc geninfo_unexecuted_blocks=1 00:33:46.394 00:33:46.394 ' 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:46.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.394 --rc genhtml_branch_coverage=1 00:33:46.394 --rc genhtml_function_coverage=1 00:33:46.394 --rc genhtml_legend=1 00:33:46.394 --rc geninfo_all_blocks=1 00:33:46.394 --rc geninfo_unexecuted_blocks=1 00:33:46.394 00:33:46.394 ' 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:33:46.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:46.394 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:46.395 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:46.395 06:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:51.656 06:02:24 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:51.656 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:51.657 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # 
echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:51.657 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:51.657 Found net devices under 0000:af:00.0: cvl_0_0 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:51.657 Found net devices under 0000:af:00.1: cvl_0_1 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # is_hw=yes 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:51.657 06:02:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:33:51.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:51.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:33:51.657 00:33:51.657 --- 10.0.0.2 ping statistics --- 00:33:51.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:51.657 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:51.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:51.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:33:51.657 00:33:51.657 --- 10.0.0.1 ping statistics --- 00:33:51.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:51.657 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # return 0 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:33:51.657 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:51.657 06:02:25 
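[Editor's note] The nvmf_tcp_init sequence just traced puts the target-side interface into its own network namespace so that initiator and target can exchange real NVMe/TCP traffic on a single host. A condensed sketch of that wiring, assuming the two interface names (cvl_0_0, cvl_0_1) and addresses used in this run; the real common.sh adds more bookkeeping and error handling.

# Sketch of the namespace-based TCP topology traced above (illustrative).
TARGET_IF=cvl_0_0          # moved into the target namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1       # stays in the root namespace, gets 10.0.0.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# Allow NVMe/TCP traffic (port 4420) in, tagged so the rule can be removed later.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF: test rule'
ping -c 1 10.0.0.2                       # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> root namespace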
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:51.658 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:51.658 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:33:51.658 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:51.658 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:51.658 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:51.658 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:33:51.658 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:33:51.658 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:33:51.658 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:51.658 06:02:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:54.185 Waiting for block devices as requested 00:33:54.185 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:54.185 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:54.185 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:54.185 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:54.443 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:54.443 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:54.443 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:54.443 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:54.701 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:54.701 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:54.701 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:54.701 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:54.958 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:54.958 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:54.959 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:55.220 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:55.220 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:55.220 06:02:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:33:55.220 06:02:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:55.220 06:02:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:33:55.220 06:02:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:33:55.220 06:02:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:55.221 06:02:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:55.221 06:02:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:33:55.221 
06:02:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:55.221 06:02:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:55.221 No valid GPT data, bailing 00:33:55.221 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:55.221 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:33:55.221 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:33:55.221 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:33:55.221 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:33:55.221 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:55.221 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:55.221 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:55.221 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:55.221 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:33:55.221 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:33:55.221 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:33:55.221 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:33:55.221 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:33:55.221 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:33:55.221 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:33:55.221 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:55.480 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:33:55.480 00:33:55.480 Discovery Log Number of Records 2, Generation counter 2 00:33:55.480 =====Discovery Log Entry 0====== 00:33:55.480 trtype: tcp 00:33:55.480 adrfam: ipv4 00:33:55.480 subtype: current discovery subsystem 00:33:55.480 treq: not specified, sq flow control disable supported 00:33:55.480 portid: 1 00:33:55.480 trsvcid: 4420 00:33:55.480 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:55.480 traddr: 10.0.0.1 00:33:55.480 eflags: none 00:33:55.480 sectype: none 00:33:55.480 =====Discovery Log Entry 1====== 00:33:55.480 trtype: tcp 00:33:55.480 adrfam: ipv4 00:33:55.480 subtype: nvme subsystem 00:33:55.480 treq: not specified, sq flow control disable supported 00:33:55.480 portid: 1 00:33:55.480 trsvcid: 4420 00:33:55.480 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:55.480 traddr: 
10.0.0.1 00:33:55.480 eflags: none 00:33:55.480 sectype: none 00:33:55.480 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:55.480 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:55.480 ===================================================== 00:33:55.480 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:55.480 ===================================================== 00:33:55.480 Controller Capabilities/Features 00:33:55.480 ================================ 00:33:55.480 Vendor ID: 0000 00:33:55.480 Subsystem Vendor ID: 0000 00:33:55.480 Serial Number: 97a88917af69dde6b4d7 00:33:55.480 Model Number: Linux 00:33:55.480 Firmware Version: 6.8.9-20 00:33:55.480 Recommended Arb Burst: 0 00:33:55.480 IEEE OUI Identifier: 00 00 00 00:33:55.480 Multi-path I/O 00:33:55.480 May have multiple subsystem ports: No 00:33:55.480 May have multiple controllers: No 00:33:55.480 Associated with SR-IOV VF: No 00:33:55.480 Max Data Transfer Size: Unlimited 00:33:55.480 Max Number of Namespaces: 0 00:33:55.480 Max Number of I/O Queues: 1024 00:33:55.480 NVMe Specification Version (VS): 1.3 00:33:55.480 NVMe Specification Version (Identify): 1.3 00:33:55.480 Maximum Queue Entries: 1024 00:33:55.480 Contiguous Queues Required: No 00:33:55.480 Arbitration Mechanisms Supported 00:33:55.480 Weighted Round Robin: Not Supported 00:33:55.480 Vendor Specific: Not Supported 00:33:55.480 Reset Timeout: 7500 ms 00:33:55.480 Doorbell Stride: 4 bytes 00:33:55.480 NVM Subsystem Reset: Not Supported 00:33:55.480 Command Sets Supported 00:33:55.480 NVM Command Set: Supported 00:33:55.480 Boot Partition: Not Supported 00:33:55.480 Memory Page Size Minimum: 4096 bytes 00:33:55.480 Memory Page Size Maximum: 4096 bytes 00:33:55.480 Persistent Memory Region: Not Supported 00:33:55.480 Optional Asynchronous Events Supported 00:33:55.480 Namespace Attribute Notices: Not Supported 00:33:55.480 Firmware Activation Notices: Not Supported 00:33:55.480 ANA Change Notices: Not Supported 00:33:55.480 PLE Aggregate Log Change Notices: Not Supported 00:33:55.480 LBA Status Info Alert Notices: Not Supported 00:33:55.480 EGE Aggregate Log Change Notices: Not Supported 00:33:55.480 Normal NVM Subsystem Shutdown event: Not Supported 00:33:55.480 Zone Descriptor Change Notices: Not Supported 00:33:55.480 Discovery Log Change Notices: Supported 00:33:55.480 Controller Attributes 00:33:55.480 128-bit Host Identifier: Not Supported 00:33:55.480 Non-Operational Permissive Mode: Not Supported 00:33:55.480 NVM Sets: Not Supported 00:33:55.480 Read Recovery Levels: Not Supported 00:33:55.480 Endurance Groups: Not Supported 00:33:55.480 Predictable Latency Mode: Not Supported 00:33:55.480 Traffic Based Keep ALive: Not Supported 00:33:55.480 Namespace Granularity: Not Supported 00:33:55.480 SQ Associations: Not Supported 00:33:55.480 UUID List: Not Supported 00:33:55.480 Multi-Domain Subsystem: Not Supported 00:33:55.480 Fixed Capacity Management: Not Supported 00:33:55.480 Variable Capacity Management: Not Supported 00:33:55.480 Delete Endurance Group: Not Supported 00:33:55.480 Delete NVM Set: Not Supported 00:33:55.480 Extended LBA Formats Supported: Not Supported 00:33:55.480 Flexible Data Placement Supported: Not Supported 00:33:55.480 00:33:55.480 Controller Memory Buffer Support 00:33:55.480 ================================ 
00:33:55.480 Supported: No 00:33:55.480 00:33:55.480 Persistent Memory Region Support 00:33:55.480 ================================ 00:33:55.480 Supported: No 00:33:55.480 00:33:55.480 Admin Command Set Attributes 00:33:55.480 ============================ 00:33:55.480 Security Send/Receive: Not Supported 00:33:55.480 Format NVM: Not Supported 00:33:55.480 Firmware Activate/Download: Not Supported 00:33:55.480 Namespace Management: Not Supported 00:33:55.480 Device Self-Test: Not Supported 00:33:55.480 Directives: Not Supported 00:33:55.480 NVMe-MI: Not Supported 00:33:55.480 Virtualization Management: Not Supported 00:33:55.480 Doorbell Buffer Config: Not Supported 00:33:55.480 Get LBA Status Capability: Not Supported 00:33:55.480 Command & Feature Lockdown Capability: Not Supported 00:33:55.480 Abort Command Limit: 1 00:33:55.480 Async Event Request Limit: 1 00:33:55.480 Number of Firmware Slots: N/A 00:33:55.480 Firmware Slot 1 Read-Only: N/A 00:33:55.480 Firmware Activation Without Reset: N/A 00:33:55.480 Multiple Update Detection Support: N/A 00:33:55.480 Firmware Update Granularity: No Information Provided 00:33:55.480 Per-Namespace SMART Log: No 00:33:55.480 Asymmetric Namespace Access Log Page: Not Supported 00:33:55.480 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:55.480 Command Effects Log Page: Not Supported 00:33:55.480 Get Log Page Extended Data: Supported 00:33:55.480 Telemetry Log Pages: Not Supported 00:33:55.480 Persistent Event Log Pages: Not Supported 00:33:55.480 Supported Log Pages Log Page: May Support 00:33:55.480 Commands Supported & Effects Log Page: Not Supported 00:33:55.480 Feature Identifiers & Effects Log Page:May Support 00:33:55.480 NVMe-MI Commands & Effects Log Page: May Support 00:33:55.480 Data Area 4 for Telemetry Log: Not Supported 00:33:55.480 Error Log Page Entries Supported: 1 00:33:55.480 Keep Alive: Not Supported 00:33:55.480 00:33:55.480 NVM Command Set Attributes 00:33:55.480 ========================== 00:33:55.480 Submission Queue Entry Size 00:33:55.480 Max: 1 00:33:55.480 Min: 1 00:33:55.480 Completion Queue Entry Size 00:33:55.480 Max: 1 00:33:55.480 Min: 1 00:33:55.480 Number of Namespaces: 0 00:33:55.480 Compare Command: Not Supported 00:33:55.480 Write Uncorrectable Command: Not Supported 00:33:55.480 Dataset Management Command: Not Supported 00:33:55.480 Write Zeroes Command: Not Supported 00:33:55.480 Set Features Save Field: Not Supported 00:33:55.480 Reservations: Not Supported 00:33:55.480 Timestamp: Not Supported 00:33:55.480 Copy: Not Supported 00:33:55.480 Volatile Write Cache: Not Present 00:33:55.480 Atomic Write Unit (Normal): 1 00:33:55.480 Atomic Write Unit (PFail): 1 00:33:55.480 Atomic Compare & Write Unit: 1 00:33:55.480 Fused Compare & Write: Not Supported 00:33:55.480 Scatter-Gather List 00:33:55.480 SGL Command Set: Supported 00:33:55.480 SGL Keyed: Not Supported 00:33:55.480 SGL Bit Bucket Descriptor: Not Supported 00:33:55.480 SGL Metadata Pointer: Not Supported 00:33:55.480 Oversized SGL: Not Supported 00:33:55.480 SGL Metadata Address: Not Supported 00:33:55.480 SGL Offset: Supported 00:33:55.480 Transport SGL Data Block: Not Supported 00:33:55.480 Replay Protected Memory Block: Not Supported 00:33:55.480 00:33:55.481 Firmware Slot Information 00:33:55.481 ========================= 00:33:55.481 Active slot: 0 00:33:55.481 00:33:55.481 00:33:55.481 Error Log 00:33:55.481 ========= 00:33:55.481 00:33:55.481 Active Namespaces 00:33:55.481 ================= 00:33:55.481 Discovery Log Page 00:33:55.481 
================== 00:33:55.481 Generation Counter: 2 00:33:55.481 Number of Records: 2 00:33:55.481 Record Format: 0 00:33:55.481 00:33:55.481 Discovery Log Entry 0 00:33:55.481 ---------------------- 00:33:55.481 Transport Type: 3 (TCP) 00:33:55.481 Address Family: 1 (IPv4) 00:33:55.481 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:55.481 Entry Flags: 00:33:55.481 Duplicate Returned Information: 0 00:33:55.481 Explicit Persistent Connection Support for Discovery: 0 00:33:55.481 Transport Requirements: 00:33:55.481 Secure Channel: Not Specified 00:33:55.481 Port ID: 1 (0x0001) 00:33:55.481 Controller ID: 65535 (0xffff) 00:33:55.481 Admin Max SQ Size: 32 00:33:55.481 Transport Service Identifier: 4420 00:33:55.481 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:55.481 Transport Address: 10.0.0.1 00:33:55.481 Discovery Log Entry 1 00:33:55.481 ---------------------- 00:33:55.481 Transport Type: 3 (TCP) 00:33:55.481 Address Family: 1 (IPv4) 00:33:55.481 Subsystem Type: 2 (NVM Subsystem) 00:33:55.481 Entry Flags: 00:33:55.481 Duplicate Returned Information: 0 00:33:55.481 Explicit Persistent Connection Support for Discovery: 0 00:33:55.481 Transport Requirements: 00:33:55.481 Secure Channel: Not Specified 00:33:55.481 Port ID: 1 (0x0001) 00:33:55.481 Controller ID: 65535 (0xffff) 00:33:55.481 Admin Max SQ Size: 32 00:33:55.481 Transport Service Identifier: 4420 00:33:55.481 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:55.481 Transport Address: 10.0.0.1 00:33:55.481 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:55.481 get_feature(0x01) failed 00:33:55.481 get_feature(0x02) failed 00:33:55.481 get_feature(0x04) failed 00:33:55.481 ===================================================== 00:33:55.481 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:55.481 ===================================================== 00:33:55.481 Controller Capabilities/Features 00:33:55.481 ================================ 00:33:55.481 Vendor ID: 0000 00:33:55.481 Subsystem Vendor ID: 0000 00:33:55.481 Serial Number: d9a6412d64806d23e4fe 00:33:55.481 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:55.481 Firmware Version: 6.8.9-20 00:33:55.481 Recommended Arb Burst: 6 00:33:55.481 IEEE OUI Identifier: 00 00 00 00:33:55.481 Multi-path I/O 00:33:55.481 May have multiple subsystem ports: Yes 00:33:55.481 May have multiple controllers: Yes 00:33:55.481 Associated with SR-IOV VF: No 00:33:55.481 Max Data Transfer Size: Unlimited 00:33:55.481 Max Number of Namespaces: 1024 00:33:55.481 Max Number of I/O Queues: 128 00:33:55.481 NVMe Specification Version (VS): 1.3 00:33:55.481 NVMe Specification Version (Identify): 1.3 00:33:55.481 Maximum Queue Entries: 1024 00:33:55.481 Contiguous Queues Required: No 00:33:55.481 Arbitration Mechanisms Supported 00:33:55.481 Weighted Round Robin: Not Supported 00:33:55.481 Vendor Specific: Not Supported 00:33:55.481 Reset Timeout: 7500 ms 00:33:55.481 Doorbell Stride: 4 bytes 00:33:55.481 NVM Subsystem Reset: Not Supported 00:33:55.481 Command Sets Supported 00:33:55.481 NVM Command Set: Supported 00:33:55.481 Boot Partition: Not Supported 00:33:55.481 Memory Page Size Minimum: 4096 bytes 00:33:55.481 Memory Page Size Maximum: 4096 bytes 00:33:55.481 Persistent Memory Region: Not 
Supported 00:33:55.481 Optional Asynchronous Events Supported 00:33:55.481 Namespace Attribute Notices: Supported 00:33:55.481 Firmware Activation Notices: Not Supported 00:33:55.481 ANA Change Notices: Supported 00:33:55.481 PLE Aggregate Log Change Notices: Not Supported 00:33:55.481 LBA Status Info Alert Notices: Not Supported 00:33:55.481 EGE Aggregate Log Change Notices: Not Supported 00:33:55.481 Normal NVM Subsystem Shutdown event: Not Supported 00:33:55.481 Zone Descriptor Change Notices: Not Supported 00:33:55.481 Discovery Log Change Notices: Not Supported 00:33:55.481 Controller Attributes 00:33:55.481 128-bit Host Identifier: Supported 00:33:55.481 Non-Operational Permissive Mode: Not Supported 00:33:55.481 NVM Sets: Not Supported 00:33:55.481 Read Recovery Levels: Not Supported 00:33:55.481 Endurance Groups: Not Supported 00:33:55.481 Predictable Latency Mode: Not Supported 00:33:55.481 Traffic Based Keep ALive: Supported 00:33:55.481 Namespace Granularity: Not Supported 00:33:55.481 SQ Associations: Not Supported 00:33:55.481 UUID List: Not Supported 00:33:55.481 Multi-Domain Subsystem: Not Supported 00:33:55.481 Fixed Capacity Management: Not Supported 00:33:55.481 Variable Capacity Management: Not Supported 00:33:55.481 Delete Endurance Group: Not Supported 00:33:55.481 Delete NVM Set: Not Supported 00:33:55.481 Extended LBA Formats Supported: Not Supported 00:33:55.481 Flexible Data Placement Supported: Not Supported 00:33:55.481 00:33:55.481 Controller Memory Buffer Support 00:33:55.481 ================================ 00:33:55.481 Supported: No 00:33:55.481 00:33:55.481 Persistent Memory Region Support 00:33:55.481 ================================ 00:33:55.481 Supported: No 00:33:55.481 00:33:55.481 Admin Command Set Attributes 00:33:55.481 ============================ 00:33:55.481 Security Send/Receive: Not Supported 00:33:55.481 Format NVM: Not Supported 00:33:55.481 Firmware Activate/Download: Not Supported 00:33:55.481 Namespace Management: Not Supported 00:33:55.481 Device Self-Test: Not Supported 00:33:55.481 Directives: Not Supported 00:33:55.481 NVMe-MI: Not Supported 00:33:55.481 Virtualization Management: Not Supported 00:33:55.481 Doorbell Buffer Config: Not Supported 00:33:55.481 Get LBA Status Capability: Not Supported 00:33:55.481 Command & Feature Lockdown Capability: Not Supported 00:33:55.481 Abort Command Limit: 4 00:33:55.481 Async Event Request Limit: 4 00:33:55.481 Number of Firmware Slots: N/A 00:33:55.481 Firmware Slot 1 Read-Only: N/A 00:33:55.481 Firmware Activation Without Reset: N/A 00:33:55.481 Multiple Update Detection Support: N/A 00:33:55.481 Firmware Update Granularity: No Information Provided 00:33:55.481 Per-Namespace SMART Log: Yes 00:33:55.481 Asymmetric Namespace Access Log Page: Supported 00:33:55.481 ANA Transition Time : 10 sec 00:33:55.481 00:33:55.481 Asymmetric Namespace Access Capabilities 00:33:55.481 ANA Optimized State : Supported 00:33:55.481 ANA Non-Optimized State : Supported 00:33:55.481 ANA Inaccessible State : Supported 00:33:55.481 ANA Persistent Loss State : Supported 00:33:55.481 ANA Change State : Supported 00:33:55.481 ANAGRPID is not changed : No 00:33:55.481 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:55.481 00:33:55.481 ANA Group Identifier Maximum : 128 00:33:55.481 Number of ANA Group Identifiers : 128 00:33:55.481 Max Number of Allowed Namespaces : 1024 00:33:55.481 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:55.481 Command Effects Log Page: Supported 00:33:55.481 Get Log Page Extended Data: 
Supported 00:33:55.481 Telemetry Log Pages: Not Supported 00:33:55.481 Persistent Event Log Pages: Not Supported 00:33:55.481 Supported Log Pages Log Page: May Support 00:33:55.481 Commands Supported & Effects Log Page: Not Supported 00:33:55.481 Feature Identifiers & Effects Log Page:May Support 00:33:55.481 NVMe-MI Commands & Effects Log Page: May Support 00:33:55.481 Data Area 4 for Telemetry Log: Not Supported 00:33:55.481 Error Log Page Entries Supported: 128 00:33:55.481 Keep Alive: Supported 00:33:55.481 Keep Alive Granularity: 1000 ms 00:33:55.481 00:33:55.481 NVM Command Set Attributes 00:33:55.481 ========================== 00:33:55.481 Submission Queue Entry Size 00:33:55.481 Max: 64 00:33:55.481 Min: 64 00:33:55.481 Completion Queue Entry Size 00:33:55.481 Max: 16 00:33:55.481 Min: 16 00:33:55.481 Number of Namespaces: 1024 00:33:55.481 Compare Command: Not Supported 00:33:55.481 Write Uncorrectable Command: Not Supported 00:33:55.481 Dataset Management Command: Supported 00:33:55.481 Write Zeroes Command: Supported 00:33:55.481 Set Features Save Field: Not Supported 00:33:55.481 Reservations: Not Supported 00:33:55.481 Timestamp: Not Supported 00:33:55.481 Copy: Not Supported 00:33:55.481 Volatile Write Cache: Present 00:33:55.481 Atomic Write Unit (Normal): 1 00:33:55.482 Atomic Write Unit (PFail): 1 00:33:55.482 Atomic Compare & Write Unit: 1 00:33:55.482 Fused Compare & Write: Not Supported 00:33:55.482 Scatter-Gather List 00:33:55.482 SGL Command Set: Supported 00:33:55.482 SGL Keyed: Not Supported 00:33:55.482 SGL Bit Bucket Descriptor: Not Supported 00:33:55.482 SGL Metadata Pointer: Not Supported 00:33:55.482 Oversized SGL: Not Supported 00:33:55.482 SGL Metadata Address: Not Supported 00:33:55.482 SGL Offset: Supported 00:33:55.482 Transport SGL Data Block: Not Supported 00:33:55.482 Replay Protected Memory Block: Not Supported 00:33:55.482 00:33:55.482 Firmware Slot Information 00:33:55.482 ========================= 00:33:55.482 Active slot: 0 00:33:55.482 00:33:55.482 Asymmetric Namespace Access 00:33:55.482 =========================== 00:33:55.482 Change Count : 0 00:33:55.482 Number of ANA Group Descriptors : 1 00:33:55.482 ANA Group Descriptor : 0 00:33:55.482 ANA Group ID : 1 00:33:55.482 Number of NSID Values : 1 00:33:55.482 Change Count : 0 00:33:55.482 ANA State : 1 00:33:55.482 Namespace Identifier : 1 00:33:55.482 00:33:55.482 Commands Supported and Effects 00:33:55.482 ============================== 00:33:55.482 Admin Commands 00:33:55.482 -------------- 00:33:55.482 Get Log Page (02h): Supported 00:33:55.482 Identify (06h): Supported 00:33:55.482 Abort (08h): Supported 00:33:55.482 Set Features (09h): Supported 00:33:55.482 Get Features (0Ah): Supported 00:33:55.482 Asynchronous Event Request (0Ch): Supported 00:33:55.482 Keep Alive (18h): Supported 00:33:55.482 I/O Commands 00:33:55.482 ------------ 00:33:55.482 Flush (00h): Supported 00:33:55.482 Write (01h): Supported LBA-Change 00:33:55.482 Read (02h): Supported 00:33:55.482 Write Zeroes (08h): Supported LBA-Change 00:33:55.482 Dataset Management (09h): Supported 00:33:55.482 00:33:55.482 Error Log 00:33:55.482 ========= 00:33:55.482 Entry: 0 00:33:55.482 Error Count: 0x3 00:33:55.482 Submission Queue Id: 0x0 00:33:55.482 Command Id: 0x5 00:33:55.482 Phase Bit: 0 00:33:55.482 Status Code: 0x2 00:33:55.482 Status Code Type: 0x0 00:33:55.482 Do Not Retry: 1 00:33:55.482 Error Location: 0x28 00:33:55.482 LBA: 0x0 00:33:55.482 Namespace: 0x0 00:33:55.482 Vendor Log Page: 0x0 00:33:55.482 ----------- 
00:33:55.482 Entry: 1 00:33:55.482 Error Count: 0x2 00:33:55.482 Submission Queue Id: 0x0 00:33:55.482 Command Id: 0x5 00:33:55.482 Phase Bit: 0 00:33:55.482 Status Code: 0x2 00:33:55.482 Status Code Type: 0x0 00:33:55.482 Do Not Retry: 1 00:33:55.482 Error Location: 0x28 00:33:55.482 LBA: 0x0 00:33:55.482 Namespace: 0x0 00:33:55.482 Vendor Log Page: 0x0 00:33:55.482 ----------- 00:33:55.482 Entry: 2 00:33:55.482 Error Count: 0x1 00:33:55.482 Submission Queue Id: 0x0 00:33:55.482 Command Id: 0x4 00:33:55.482 Phase Bit: 0 00:33:55.482 Status Code: 0x2 00:33:55.482 Status Code Type: 0x0 00:33:55.482 Do Not Retry: 1 00:33:55.482 Error Location: 0x28 00:33:55.482 LBA: 0x0 00:33:55.482 Namespace: 0x0 00:33:55.482 Vendor Log Page: 0x0 00:33:55.482 00:33:55.482 Number of Queues 00:33:55.482 ================ 00:33:55.482 Number of I/O Submission Queues: 128 00:33:55.482 Number of I/O Completion Queues: 128 00:33:55.482 00:33:55.482 ZNS Specific Controller Data 00:33:55.482 ============================ 00:33:55.482 Zone Append Size Limit: 0 00:33:55.482 00:33:55.482 00:33:55.482 Active Namespaces 00:33:55.482 ================= 00:33:55.482 get_feature(0x05) failed 00:33:55.482 Namespace ID:1 00:33:55.482 Command Set Identifier: NVM (00h) 00:33:55.482 Deallocate: Supported 00:33:55.482 Deallocated/Unwritten Error: Not Supported 00:33:55.482 Deallocated Read Value: Unknown 00:33:55.482 Deallocate in Write Zeroes: Not Supported 00:33:55.482 Deallocated Guard Field: 0xFFFF 00:33:55.482 Flush: Supported 00:33:55.482 Reservation: Not Supported 00:33:55.482 Namespace Sharing Capabilities: Multiple Controllers 00:33:55.482 Size (in LBAs): 1953525168 (931GiB) 00:33:55.482 Capacity (in LBAs): 1953525168 (931GiB) 00:33:55.482 Utilization (in LBAs): 1953525168 (931GiB) 00:33:55.482 UUID: 0b1225c6-b37a-4bca-b69a-ca41ba742801 00:33:55.482 Thin Provisioning: Not Supported 00:33:55.482 Per-NS Atomic Units: Yes 00:33:55.482 Atomic Boundary Size (Normal): 0 00:33:55.482 Atomic Boundary Size (PFail): 0 00:33:55.482 Atomic Boundary Offset: 0 00:33:55.482 NGUID/EUI64 Never Reused: No 00:33:55.482 ANA group ID: 1 00:33:55.482 Namespace Write Protected: No 00:33:55.482 Number of LBA Formats: 1 00:33:55.482 Current LBA Format: LBA Format #00 00:33:55.482 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:55.482 00:33:55.482 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:55.482 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:55.482 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:33:55.482 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:55.482 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:33:55.482 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:55.482 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:55.482 rmmod nvme_tcp 00:33:55.482 rmmod nvme_fabrics 00:33:55.482 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:55.482 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:33:55.482 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:33:55.482 06:02:29 
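[Editor's note] The kernel soft-target that produced the discovery log and identify output above is assembled entirely through nvmet configfs after scripts/setup.sh reset hands the local NVMe drive back from vfio-pci to the kernel nvme driver. A trimmed sketch of the configure_kernel_target steps visible in the trace follows; the trace only shows the values being echoed, so the attribute file names below are the standard nvmet configfs names and should be read as assumptions, and the namespace is backed by /dev/nvme0n1 as it was in this run.

# Sketch of the configfs kernel-target setup traced above (illustrative).
modprobe nvmet nvme-tcp
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo "SPDK-$nqn"  > "$subsys/attr_model"           # model string seen in the identify output
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                # export the subsystem on the port

# The two discovery log records shown above then come back from:
nvme discover -t tcp -a 10.0.0.1 -s 4420

The per-controller details above were subsequently pulled with spdk_nvme_identify against both the discovery subsystem and nqn.2016-06.io.spdk:testnqn, as shown in the trace.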
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:33:55.482 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:55.482 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:55.482 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:55.482 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:33:55.482 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:33:55.482 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:55.482 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:33:55.740 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:55.740 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:55.740 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:55.741 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:55.741 06:02:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.641 06:02:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:57.641 06:02:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:57.641 06:02:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:57.641 06:02:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:33:57.641 06:02:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:57.641 06:02:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:57.641 06:02:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:57.641 06:02:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:57.641 06:02:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:33:57.641 06:02:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:33:57.641 06:02:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:00.924 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:00.924 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:00.924 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:00.924 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:00.924 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:00.924 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:00.924 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:00.924 
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:00.924 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:00.924 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:00.924 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:00.924 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:00.924 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:00.924 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:00.924 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:00.924 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:01.183 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:01.447 00:34:01.447 real 0m15.414s 00:34:01.447 user 0m3.744s 00:34:01.447 sys 0m7.924s 00:34:01.447 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:01.447 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:01.447 ************************************ 00:34:01.447 END TEST nvmf_identify_kernel_target 00:34:01.447 ************************************ 00:34:01.447 06:02:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:01.447 06:02:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:01.447 06:02:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:01.447 06:02:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.447 ************************************ 00:34:01.447 START TEST nvmf_auth_host 00:34:01.447 ************************************ 00:34:01.447 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:01.447 * Looking for test storage... 
00:34:01.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:01.707 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:01.707 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:34:01.707 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:01.707 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:01.707 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:01.707 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:01.707 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:01.707 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:01.707 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:01.707 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:01.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.708 --rc genhtml_branch_coverage=1 00:34:01.708 --rc genhtml_function_coverage=1 00:34:01.708 --rc genhtml_legend=1 00:34:01.708 --rc geninfo_all_blocks=1 00:34:01.708 --rc geninfo_unexecuted_blocks=1 00:34:01.708 00:34:01.708 ' 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:01.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.708 --rc genhtml_branch_coverage=1 00:34:01.708 --rc genhtml_function_coverage=1 00:34:01.708 --rc genhtml_legend=1 00:34:01.708 --rc geninfo_all_blocks=1 00:34:01.708 --rc geninfo_unexecuted_blocks=1 00:34:01.708 00:34:01.708 ' 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:01.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.708 --rc genhtml_branch_coverage=1 00:34:01.708 --rc genhtml_function_coverage=1 00:34:01.708 --rc genhtml_legend=1 00:34:01.708 --rc geninfo_all_blocks=1 00:34:01.708 --rc geninfo_unexecuted_blocks=1 00:34:01.708 00:34:01.708 ' 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:01.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.708 --rc genhtml_branch_coverage=1 00:34:01.708 --rc genhtml_function_coverage=1 00:34:01.708 --rc genhtml_legend=1 00:34:01.708 --rc geninfo_all_blocks=1 00:34:01.708 --rc geninfo_unexecuted_blocks=1 00:34:01.708 00:34:01.708 ' 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:01.708 06:02:35 
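[Editor's note] Before the nvmf_auth_host test now initializing could start, the previous test's fabric was torn down (nvmftestfini followed by clean_kernel_target, traced a short way back). A condensed sketch of that cleanup, in the order it ran; remove_spdk_ns itself executes with tracing disabled, so deleting the namespace is shown here as its essential effect rather than its literal code.

# Sketch of the teardown traced above (illustrative).
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

# Host side (nvmftestfini): unload initiator modules, drop the tagged firewall
# rule, and remove the target network namespace.
modprobe -r nvme-tcp nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1

# Target side (clean_kernel_target): quiesce and dismantle the configfs tree.
echo 0 > "$subsys/namespaces/1/enable"
rm -f  "$port/subsystems/$nqn"
rmdir  "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet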
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:01.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:01.708 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:01.709 06:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:06.978 06:02:40 
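[Editor's note] A few records back, host/auth.sh declared digests=(sha256 sha384 sha512) and dhgroups=(ffdhe2048 ... ffdhe8192); the auth test exercises combinations of those two arrays for DH-HMAC-CHAP. A rough sketch of how such a matrix can be walked is below; run_auth_case is a hypothetical placeholder and not a function from auth.sh, and nothing about the per-combination body is taken from this trace.

# Sketch only: iterating the digest x dhgroup matrix declared by auth.sh.
digests=("sha256" "sha384" "sha512")
dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        echo "testing DH-HMAC-CHAP with hash=$digest dhgroup=$dhgroup"
        run_auth_case "$digest" "$dhgroup"   # hypothetical per-combination test hook
    done
done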
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:06.978 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:06.978 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:06.978 06:02:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:06.978 Found net devices under 0000:af:00.0: cvl_0_0 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:06.978 Found net devices under 0000:af:00.1: cvl_0_1 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # is_hw=yes 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:06.978 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:06.978 06:02:40 
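The probe traced above classifies NICs by PCI vendor:device ID (the two Intel E810 ports, 0x8086:0x159b, in this run) and then resolves each PCI function to its kernel net device through sysfs, which is how cvl_0_0 and cvl_0_1 are found. A minimal stand-alone sketch of that lookup; the device ID, bus addresses and cvl_* names are simply the ones this run reported.

  # List Intel E810 functions (device 0x159b), then map each PCI function to its net device via sysfs.
  for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"
      done
  done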
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:06.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:06.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:34:06.979 00:34:06.979 --- 10.0.0.2 ping statistics --- 00:34:06.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:06.979 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:06.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:06.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:34:06.979 00:34:06.979 --- 10.0.0.1 ping statistics --- 00:34:06.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:06.979 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # return 0 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=3551118 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 3551118 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3551118 ']' 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
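Consolidated, the nvmf_tcp_init steps traced above build the loopback test topology: the target port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened, and one ping in each direction confirms reachability before the target application is started. A condensed sketch using the exact interface names and addresses from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target ns -> root ns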
00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:06.979 06:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=f231f0185b39ab4f5ad3592dc9549dd8 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.Zet 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key f231f0185b39ab4f5ad3592dc9549dd8 0 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 f231f0185b39ab4f5ad3592dc9549dd8 0 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=f231f0185b39ab4f5ad3592dc9549dd8 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:34:07.238 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.Zet 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.Zet 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Zet 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:07.498 06:02:41 
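Every secret used below comes out of the same gen_dhchap_key pattern traced above: read the requested amount of random data, wrap it as an NVMe DH-HMAC-CHAP secret, and store it owner-readable-only in a temp file whose name records the digest. A minimal sketch of the visible shell steps for the "null 32" case; the final DHHC-1 encoding is produced by the harness's inline python helper and is only summarized in a comment here, not reproduced.

  digest=null; len=32                              # as for keys[0] above
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of raw key material
  file=$(mktemp -t "spdk.key-${digest}.XXX")
  # An inline python step (elided here) writes the secret into $file as
  # "DHHC-1:<digest-id>:<encoded key>:", with digest-id 0/1/2/3 = null/sha256/sha384/sha512
  # per the digests map in the trace above.
  chmod 0600 "$file"
  echo "$file"                                     # this path is what ends up in keys[]/ckeys[]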
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=b7811f752ca20a8a34a21521c885c49729d8852412ca4b97104e551095c51bce 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.j8g 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key b7811f752ca20a8a34a21521c885c49729d8852412ca4b97104e551095c51bce 3 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 b7811f752ca20a8a34a21521c885c49729d8852412ca4b97104e551095c51bce 3 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=b7811f752ca20a8a34a21521c885c49729d8852412ca4b97104e551095c51bce 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.j8g 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.j8g 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.j8g 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=7e12bd2bb860543feb7cca89a4128507c28e8dd8c66ccfd6 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.90Y 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 7e12bd2bb860543feb7cca89a4128507c28e8dd8c66ccfd6 0 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 7e12bd2bb860543feb7cca89a4128507c28e8dd8c66ccfd6 0 
00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=7e12bd2bb860543feb7cca89a4128507c28e8dd8c66ccfd6 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.90Y 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.90Y 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.90Y 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:07.498 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=180f0c4d24c01e73e41d4b068d763915246079ae7659d5f1 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.Mu0 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 180f0c4d24c01e73e41d4b068d763915246079ae7659d5f1 2 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 180f0c4d24c01e73e41d4b068d763915246079ae7659d5f1 2 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=180f0c4d24c01e73e41d4b068d763915246079ae7659d5f1 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.Mu0 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.Mu0 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Mu0 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:07.499 06:02:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ea5739a8d6d643c1edd1bd61f6a9a6f2 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.4pJ 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key ea5739a8d6d643c1edd1bd61f6a9a6f2 1 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ea5739a8d6d643c1edd1bd61f6a9a6f2 1 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=ea5739a8d6d643c1edd1bd61f6a9a6f2 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:34:07.499 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.4pJ 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.4pJ 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.4pJ 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=0414f8a6249618f53c553a5846e6e2c1 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.mem 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 0414f8a6249618f53c553a5846e6e2c1 1 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 0414f8a6249618f53c553a5846e6e2c1 1 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
key=0414f8a6249618f53c553a5846e6e2c1 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.mem 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.mem 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.mem 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=00a19114c42fcb541955bc1e190fb5e1894937dc24b0693e 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.gbO 00:34:07.758 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 00a19114c42fcb541955bc1e190fb5e1894937dc24b0693e 2 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 00a19114c42fcb541955bc1e190fb5e1894937dc24b0693e 2 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=00a19114c42fcb541955bc1e190fb5e1894937dc24b0693e 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.gbO 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.gbO 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.gbO 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:34:07.759 06:02:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=5b7b919675b47b8799fe986912432dd6 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.fHZ 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 5b7b919675b47b8799fe986912432dd6 0 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 5b7b919675b47b8799fe986912432dd6 0 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=5b7b919675b47b8799fe986912432dd6 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.fHZ 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.fHZ 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.fHZ 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ef8f453f3347d86b20520046f1bb2e8a8be38d081ff168c01495889dc2b49595 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.DsJ 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key ef8f453f3347d86b20520046f1bb2e8a8be38d081ff168c01495889dc2b49595 3 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ef8f453f3347d86b20520046f1bb2e8a8be38d081ff168c01495889dc2b49595 3 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=ef8f453f3347d86b20520046f1bb2e8a8be38d081ff168c01495889dc2b49595 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:34:07.759 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # python - 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.DsJ 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.DsJ 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.DsJ 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3551118 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3551118 ']' 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:08.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Zet 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.j8g ]] 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.j8g 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.90Y 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Mu0 ]] 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Mu0 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.4pJ 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.mem ]] 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mem 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.gbO 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.018 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.fHZ ]] 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.fHZ 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.DsJ 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:08.277 06:02:41 
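Each generated secret is then registered with the running target through the keyring_file_add_key RPC, pairing every host key (keyN) with its controller key (ckeyN) where one exists. rpc_cmd is the harness wrapper around the target's RPC socket; issued directly, the first few calls above would look roughly like this, with the remaining key2..key4/ckey2..ckey3 calls following the same shape.

  # Assumes scripts/rpc.py from the SPDK tree talking to the default /var/tmp/spdk.sock socket.
  scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.Zet
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.j8g
  scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.90Y
  scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Mu0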
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:08.277 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:08.278 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:08.278 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:08.278 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:08.278 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:34:08.278 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:08.278 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:08.278 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:08.278 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:34:08.278 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:08.278 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:34:08.278 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:08.278 06:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:10.808 Waiting for block devices as requested 00:34:10.808 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:11.066 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:11.066 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:11.066 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:11.066 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:11.324 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:11.324 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:11.324 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:11.324 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:11.582 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:11.582 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:11.582 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:11.582 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:11.840 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:11.840 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:11.840 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:12.097 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:12.664 No valid GPT data, bailing 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:12.664 06:02:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:12.664 00:34:12.664 Discovery Log Number of Records 2, Generation counter 2 00:34:12.664 =====Discovery Log Entry 0====== 00:34:12.664 trtype: tcp 00:34:12.664 adrfam: ipv4 00:34:12.664 subtype: current discovery subsystem 00:34:12.664 treq: not specified, sq flow control disable supported 00:34:12.664 portid: 1 00:34:12.664 trsvcid: 4420 00:34:12.664 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:12.664 traddr: 10.0.0.1 00:34:12.664 eflags: none 00:34:12.664 sectype: none 00:34:12.664 =====Discovery Log Entry 1====== 00:34:12.664 trtype: tcp 00:34:12.664 adrfam: ipv4 00:34:12.664 subtype: nvme subsystem 00:34:12.664 treq: not specified, sq flow control disable supported 00:34:12.664 portid: 1 00:34:12.664 trsvcid: 4420 00:34:12.664 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:12.664 traddr: 10.0.0.1 00:34:12.664 eflags: none 00:34:12.664 sectype: none 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
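configure_kernel_target and the nvmet_auth_init steps traced above export the local NVMe namespace through the kernel nvmet configfs tree, so the SPDK host under test has something to authenticate against, and the nvme discover output confirms that nqn.2024-02.io.spdk:cnode0 answers on 10.0.0.1:4420 with nqn.2024-02.io.spdk:host0 as the only allowed host. A consolidated sketch of those steps follows; the values are the ones echoed in the trace, while the configfs attribute file names are the standard kernel nvmet ones and are an assumption of this sketch, since the trace does not show the redirection targets.

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=/sys/kernel/config/nvmet/ports/1
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  modprobe nvmet
  mkdir "$subsys" "$subsys/namespaces/1" "$port" "$host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # backing block device found above
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"                  # address inside the target namespace
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"
  echo 0 > "$subsys/attr_allow_any_host"                   # only explicitly allowed hosts may connect
  ln -s "$host" "$subsys/allowed_hosts/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
      --hostid=80b56b8f-cbc7-e911-906e-0017a4403562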
-- host/auth.sh@49 -- # echo ffdhe2048 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: ]] 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
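nvmet_auth_set_key, whose first invocation (sha256, ffdhe2048, keyid 1) is traced above, installs the digest, DH group, host secret and controller secret on the kernel target's host entry, which is what forces the subsequent connect to complete DH-HMAC-CHAP. The secret strings are the ones echoed in the trace; the dhchap_* attribute names under the host directory are the standard kernel nvmet ones and are assumed here, since the trace again omits the redirection targets.

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"       # digest under test
  echo ffdhe2048      > "$host/dhchap_dhgroup"    # DH group under test
  echo 'DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==:' > "$host/dhchap_key"
  echo 'DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==:' > "$host/dhchap_ctrl_key"   # controller key for bidirectional auth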
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.664 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.923 nvme0n1 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: ]] 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
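On the host side, connect_authenticate then limits the SPDK bdev_nvme module to the digests and DH groups under test and attaches to the kernel target with the matching key pair; the pass criterion visible above is that the attach reports the nvme0n1 namespace and bdev_nvme_get_controllers shows a controller named nvme0, after which the controller is detached and the next digest/dhgroup/key combination is tried. Issued directly against the RPC socket, the cycle traced above for keyid 1 looks roughly like:

  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  scripts/rpc.py bdev_nvme_get_controllers          # expect one controller named "nvme0"
  scripts/rpc.py bdev_nvme_detach_controller nvme0  # tear down before the next combination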
00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.923 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.182 nvme0n1 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.182 06:02:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: ]] 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.182 06:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.440 nvme0n1 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: ]] 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:13.440 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.441 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:13.441 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.441 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.441 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.441 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.441 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:13.441 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:13.441 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:13.441 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.441 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.441 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:13.441 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.441 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:13.441 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:13.441 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:13.441 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:13.441 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.441 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.699 nvme0n1 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: ]] 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.699 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.958 nvme0n1 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.958 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.959 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:13.959 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.959 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:13.959 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:13.959 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:13.959 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:13.959 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.959 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.959 nvme0n1 00:34:13.959 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.959 06:02:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.959 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.959 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.959 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.959 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: ]] 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.217 06:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.217 nvme0n1 00:34:14.217 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.217 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.217 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.217 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.217 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.217 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: ]] 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:14.475 
06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.475 nvme0n1 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.475 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: ]] 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.734 06:02:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.734 nvme0n1 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.734 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:14.992 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.992 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.992 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:14.992 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.992 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: ]] 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.993 06:02:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.993 nvme0n1 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.993 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:15.251 06:02:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.251 06:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.251 nvme0n1 00:34:15.251 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.251 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.251 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.251 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.251 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.251 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.251 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.251 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:15.251 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.251 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: ]] 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # 
ip_candidates=() 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.510 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.768 nvme0n1 00:34:15.768 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.768 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.768 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.768 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.768 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.768 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.768 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.768 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.768 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.768 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.768 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.768 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.768 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:15.769 06:02:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: ]] 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.769 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.027 nvme0n1 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: ]] 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
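The trace above repeats one pattern for every key index under test: the target is loaded with a DH-HMAC-CHAP secret, the SPDK host is restricted to the matching digest/dhgroup, and a TCP connection is attempted with that key. Below is a condensed, hand-written sketch of a single round (here sha256 + ffdhe4096, key index 0); it only restates the rpc_cmd invocations already visible in the xtrace, with the DHHC-1 secrets referred to by their registered names rather than spelled out, so treat it as an illustration of the flow rather than the test script itself.

# One authentication round, as exercised repeatedly in the log above.
# nvmet_auth_set_key and rpc_cmd are the test suite's own helpers; key0/ckey0 are
# the pre-registered key names passed on the attach line in the trace.
digest=sha256 dhgroup=ffdhe4096 keyid=0
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # target side: install key/ckey for this round
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # controller came up, auth succeeded
rpc_cmd bdev_nvme_detach_controller nvme0             # tear down before the next key/dhgroup

Each dhgroup block in this section (ffdhe4096, then ffdhe6144, then ffdhe8192) reruns this loop for every configured key index, as the continuing trace shows.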
00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:16.027 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.028 06:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.286 nvme0n1 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: ]] 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.286 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.544 nvme0n1 00:34:16.544 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.544 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.544 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.544 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.544 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.544 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.802 06:02:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.802 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.060 nvme0n1 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: ]] 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip=NVMF_INITIATOR_IP 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.060 06:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.317 nvme0n1 00:34:17.317 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.317 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.317 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.317 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.317 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: ]] 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 
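Interleaved with each round above is the get_main_ns_ip helper (the nvmf/common.sh@765-779 lines), which resolves the address used on the attach line. A rough reconstruction of that helper, inferred only from the xtrace rather than taken from the SPDK source tree, looks like this:

# Reconstructed from the trace; variable names match the log, but the real
# nvmf/common.sh may differ in detail. TEST_TRANSPORT holding "tcp" is an assumption.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs would use the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # this TCP run uses the initiator IP
    [[ -z $TEST_TRANSPORT ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}         # -> NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                  # indirect expansion; 10.0.0.1 in this run
    echo "${!ip}"
}

This is why every bdev_nvme_attach_controller call in the trace targets 10.0.0.1 port 4420.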
00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:17.575 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.576 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:17.576 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:17.576 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:17.576 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:17.576 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.576 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.833 nvme0n1 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.833 06:02:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: ]] 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:17.833 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.834 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.834 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:17.834 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.834 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:17.834 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:17.834 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:18.091 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:18.091 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.091 06:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.348 nvme0n1 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: ]] 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.348 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.912 nvme0n1 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # ip_candidates=() 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.912 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.170 nvme0n1 00:34:19.170 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.170 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.170 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.170 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.170 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.170 06:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.170 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.170 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.170 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.170 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.170 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.170 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:19.170 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.170 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:19.170 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.170 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:19.170 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:19.170 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:19.170 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:19.170 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:19.170 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:19.170 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:19.170 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:19.427 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: ]] 00:34:19.427 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:19.427 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:19.427 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.427 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:19.427 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:19.427 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:19.427 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.427 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:19.427 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.427 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.427 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.428 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.428 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:19.428 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:19.428 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:19.428 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.428 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.428 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:19.428 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.428 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:19.428 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:19.428 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:19.428 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:19.428 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.428 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:19.992 nvme0n1 00:34:19.992 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.992 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.992 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.992 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.992 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.992 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.992 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.992 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.992 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.992 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.992 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.992 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.992 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:19.992 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.992 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:19.992 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:19.992 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:19.992 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:19.992 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: ]] 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.993 06:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.558 nvme0n1 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:20.558 
06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: ]] 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.558 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.170 nvme0n1 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: ]] 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.170 
06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.170 06:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.833 nvme0n1 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
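For key index 4 the trace shows an empty controller key (host/auth.sh@46 assigns ckey= with no value), and host/auth.sh@58 builds the optional attach arguments with the ":+" form of parameter expansion, so --dhchap-ctrlr-key is dropped entirely rather than being passed an empty value. A small illustration of that idiom, with placeholder values standing in for the DHHC-1 secrets seen above:

    # ${var:+words} expands to "words" only when var is set and non-empty,
    # so an empty controller key removes both the flag and its argument.
    declare -a ckeys=([0]="ckey0" [4]="")
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"   # prints 0 for keyid=4; it would print 2 for keyid=0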
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.833 06:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.397 nvme0n1 00:34:22.397 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.397 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.397 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.397 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.397 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.397 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.397 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.397 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.397 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.397 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: ]] 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.654 nvme0n1 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.654 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.655 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:22.655 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.655 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:22.655 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:22.655 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:22.655 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:22.655 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:22.655 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:22.655 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:22.655 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:22.655 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: ]] 00:34:22.655 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:22.655 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:22.655 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.655 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:22.655 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:22.655 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:22.655 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.655 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:22.655 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.655 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.912 nvme0n1 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:22.912 06:02:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:22.912 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: ]] 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.913 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.170 nvme0n1 00:34:23.170 06:02:56 
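Before every attach, host/auth.sh@61 calls get_main_ns_ip, and the nvmf/common.sh@765-779 frames show how the address is chosen: an associative array maps each transport to the name of the variable holding the right IP (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp), the entry for the active transport is selected, and its value (10.0.0.1 in this run) is echoed and passed to bdev_nvme_attach_controller as -a. A condensed reconstruction of that helper, assuming TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1 as in this log; the real function also bails out if the transport or the resolved address is empty, which is what the [[ -z ... ]] checks in the trace are doing:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )
        ip=${ip_candidates[$TEST_TRANSPORT]}  # variable *name*, e.g. NVMF_INITIATOR_IP
        echo "${!ip}"                         # indirect expansion -> 10.0.0.1 here
    }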
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: ]] 00:34:23.170 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.171 06:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.428 nvme0n1 00:34:23.428 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.428 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.428 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.428 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.428 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.428 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.428 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.428 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.428 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.428 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.429 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.686 nvme0n1 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: ]] 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:23.686 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:23.687 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:23.687 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.687 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.944 nvme0n1 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.944 
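The host/auth.sh@100-102 frames scattered through this excerpt show the shape of the driver: three nested loops over digests, DH groups, and key indices, with each innermost pass installing the target key and then running connect_authenticate for the same (digest, dhgroup, keyid) triple. This excerpt covers the tail of the sha256/ffdhe8192 pass followed by sha384 over ffdhe2048 and ffdhe3072. A sketch of that loop, where the digest and dhgroup lists reflect only the values visible here (the full test likely iterates more groups) and keys/ckeys are the DHHC-1 secrets prepared earlier in auth.sh:

    # Driver loop as implied by the host/auth.sh@100-103 frames in the trace.
    digests=(sha256 sha384)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done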
06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: ]] 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:23.944 06:02:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.944 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.202 nvme0n1 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: ]] 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:24.202 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:24.203 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:24.203 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.203 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:24.203 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.203 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.203 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.203 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.203 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:24.203 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:24.203 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:24.203 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.203 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.203 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:24.203 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.203 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:24.203 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:24.203 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:24.203 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:24.203 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.203 06:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.460 nvme0n1 00:34:24.460 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.460 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.460 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.460 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.460 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.460 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.460 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: ]] 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local 
-A ip_candidates 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.461 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.719 nvme0n1 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:24.719 
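The xtrace at this point is mid-way through the sha384/ffdhe3072 pass for keyid 4, whose controller key (ckey) is empty, so the attach that follows passes only --dhchap-key. Condensed from the rpc_cmd calls visible in the trace, one such iteration amounts to the sketch below; rpc_cmd is assumed to be the test-harness wrapper around SPDK's rpc.py, and all flags, addresses and NQNs are copied verbatim from the log rather than invented.

    # one sha384/ffdhe3072 iteration for keyid 4 (no controller key), reconstructed from the trace
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
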
06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.719 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.977 nvme0n1 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.977 
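After each attach, the trace checks that authentication actually produced a controller named nvme0 and then tears it down before the next key. A minimal sketch of that verification step, assuming rpc_cmd wraps SPDK's rpc.py and that the bdev_nvme_get_controllers / jq pair seen in the trace is one pipeline inside host/auth.sh:

    # verify the DH-HMAC-CHAP attach succeeded, then clean up for the next key/dhgroup combination
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == "nvme0" ]]                   # a missing controller here would fail the test
    rpc_cmd bdev_nvme_detach_controller nvme0
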
06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: ]] 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.977 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.235 nvme0n1 00:34:25.235 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.235 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.235 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.235 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.235 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.235 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.235 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.235 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.235 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.235 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.235 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.235 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.235 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:25.235 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.235 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:25.235 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:25.235 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:25.235 06:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: ]] 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:25.235 06:02:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.235 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.493 nvme0n1 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: ]] 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.493 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.750 nvme0n1 00:34:25.750 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.750 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.750 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.750 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.750 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: ]] 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.008 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.266 nvme0n1 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:26.266 06:02:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:26.266 06:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:26.266 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.266 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.524 nvme0n1 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: ]] 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.524 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.090 nvme0n1 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: ]] 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.090 06:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.347 nvme0n1 00:34:27.347 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.347 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.347 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.347 06:03:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.347 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.347 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: ]] 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.605 06:03:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.605 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.863 nvme0n1 00:34:27.863 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.863 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.863 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.863 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.863 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.863 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.863 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.863 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.863 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.863 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.863 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.863 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.863 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:27.863 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.863 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.863 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:27.863 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:27.863 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:27.863 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:27.863 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.863 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: ]] 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:27.864 06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.864 
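The same set-key / connect / verify cycle repeats for every dhgroup and key index; the for-loops at host/auth.sh lines 101-103 drive it, with the ffdhe8192 pass starting just below. A rough reconstruction of that loop structure follows; the digest is fixed at sha384 in this stretch of the log, and any array contents beyond what the trace shows are assumptions.

    # outer iteration implied by host/auth.sh@101-104 in the trace (digest fixed at sha384 here)
    for dhgroup in "${dhgroups[@]}"; do        # the trace covers ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192
        for keyid in "${!keys[@]}"; do         # keyids 0-4 in the trace
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done
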
06:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.429 nvme0n1 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.429 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.995 nvme0n1 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.995 06:03:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: ]] 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.995 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:28.996 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:28.996 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:28.996 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.996 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.996 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:28.996 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.996 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:28.996 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:28.996 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:28.996 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:28.996 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.996 06:03:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.562 nvme0n1 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: ]] 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.562 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.129 nvme0n1 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: ]] 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.129 
06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.129 06:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.695 nvme0n1 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: ]] 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.695 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.953 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.953 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.953 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:30.953 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:30.953 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:30.954 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.954 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.954 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:30.954 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.954 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:30.954 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:30.954 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:30.954 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:30.954 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.954 06:03:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.520 nvme0n1 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.520 06:03:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:31.520 06:03:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.520 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.086 nvme0n1 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: ]] 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.086 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:32.345 nvme0n1 00:34:32.345 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.345 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.345 06:03:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.345 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.345 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.345 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.345 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.345 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.345 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.345 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.345 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.345 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.345 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:32.345 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.345 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:32.345 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:32.345 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:32.345 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:32.345 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:32.345 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:32.345 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:32.345 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:32.345 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: ]] 00:34:32.345 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.346 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.604 nvme0n1 00:34:32.604 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.604 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.604 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:32.605 
06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: ]] 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.605 nvme0n1 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.605 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: ]] 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.863 
06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:32.863 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.864 nvme0n1 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.864 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.122 nvme0n1 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.122 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: ]] 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.123 06:03:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.381 nvme0n1 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.381 
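The blocks repeating through this trace are iterations of the test's connect/verify cycle: for each digest, DH group and key index the host is configured via bdev_nvme_set_options --dhchap-digests ... --dhchap-dhgroups ..., a controller is attached over TCP with bdev_nvme_attach_controller ... --dhchap-key keyN (plus --dhchap-ctrlr-key ckeyN whenever a controller key exists for that index), the attach is confirmed with bdev_nvme_get_controllers, and the controller is detached again. What follows is a simplified sketch of that cycle reconstructed from the trace, not the actual host/auth.sh; rpc_cmd is assumed to be the autotest wrapper around SPDK's scripts/rpc.py, the keys[]/ckeys[] arrays are assumed to have been populated by the test setup, and the address/NQNs are the ones visible above (10.0.0.1:4420, nqn.2024-02.io.spdk:host0 / cnode0).

    # Host-side connect/verify cycle, simplified from the repeated blocks in this trace.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Expands to nothing when no controller key exists for this index (e.g. keyid 4).
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Restrict the host to the digest/DH group under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Attach over TCP, authenticating with the secret registered under the name "key<id>".
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # Authentication succeeded if the controller shows up; then clean up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }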
06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:33.381 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: ]] 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:33.382 06:03:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.382 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.640 nvme0n1 00:34:33.640 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.640 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.640 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.640 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.640 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.640 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.640 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.640 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.640 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.640 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:33.641 06:03:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: ]] 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.641 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.899 nvme0n1 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: ]] 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.899 06:03:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.899 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.157 nvme0n1 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:34.157 
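On the target side, each iteration is preceded by nvmet_auth_set_key (host/auth.sh@42-51 in the trace), which selects the HMAC digest, DH group and DHHC-1 secret the kernel target will accept for this host, and the whole matrix is driven by the loops at host/auth.sh@101-103 (every DH group against every key index). A hedged sketch of that half follows; the trace only shows the echoed values, not where they are written, so the configfs path and attribute names below are assumptions about the Linux nvmet target rather than anything visible in this log, and connect_authenticate is the host-side cycle sketched earlier.

    # Target-side key provisioning, reconstructed from the echoed values in the trace.
    # NOTE: the configfs location and attribute names are assumed, not shown in the log.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
        echo "hmac(${digest})"  > "${host_dir}/dhchap_hash"       # e.g. 'hmac(sha512)'
        echo "${dhgroup}"       > "${host_dir}/dhchap_dhgroup"    # e.g. ffdhe3072
        echo "${keys[keyid]}"   > "${host_dir}/dhchap_key"        # DHHC-1:xx:...: host secret
        # Bidirectional authentication only when a controller key exists for this index.
        [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "${host_dir}/dhchap_ctrl_key"
    }

    # Loop driving the trace: every DH group is exercised with every key index,
    # with the digest fixed at sha512 for this pass.
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done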
06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.157 06:03:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:34.416 nvme0n1 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: ]] 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:34.416 06:03:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.416 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.674 nvme0n1 00:34:34.674 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.675 06:03:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: ]] 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.675 06:03:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.675 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.933 nvme0n1 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: ]] 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.933 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.190 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.190 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.190 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:35.190 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:35.190 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:35.190 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.190 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.190 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:35.190 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.190 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:35.190 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:35.190 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:35.190 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:35.190 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.190 06:03:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.190 nvme0n1 00:34:35.190 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.190 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.190 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.190 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.190 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.190 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.447 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: ]] 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.448 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.706 nvme0n1 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.706 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.964 nvme0n1 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: ]] 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:35.964 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.965 06:03:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.965 06:03:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.530 nvme0n1 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: ]] 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:36.530 06:03:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.530 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.788 nvme0n1 00:34:36.788 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.788 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.788 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.788 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.788 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.788 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.788 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.788 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.788 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.788 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: ]] 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.046 06:03:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.305 nvme0n1 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: ]] 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.305 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.872 nvme0n1 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:37.872 06:03:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.872 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.130 nvme0n1 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIzMWYwMTg1YjM5YWI0ZjVhZDM1OTJkYzk1NDlkZDjxIS3W: 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: ]] 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc4MTFmNzUyY2EyMGE4YTM0YTIxNTIxYzg4NWM0OTcyOWQ4ODUyNDEyY2E0Yjk3MTA0ZTU1MTA5NWM1MWJjZTBYvb4=: 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.130 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.388 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.388 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.388 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:38.388 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:38.388 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:38.388 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.388 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.388 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:38.388 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.388 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:38.388 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:38.388 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:38.388 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:38.388 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.388 06:03:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.954 nvme0n1 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: ]] 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:38.954 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:38.955 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:38.955 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.955 06:03:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.520 nvme0n1 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.520 06:03:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: ]] 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.520 06:03:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:39.520 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:39.521 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:39.521 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.521 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.087 nvme0n1 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDBhMTkxMTRjNDJmY2I1NDE5NTViYzFlMTkwZmI1ZTE4OTQ5MzdkYzI0YjA2OTNl/SqnVg==: 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: ]] 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWI3YjkxOTY3NWI0N2I4Nzk5ZmU5ODY5MTI0MzJkZDbyj8Z6: 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:40.087 06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.087 
06:03:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.653 nvme0n1 00:34:40.653 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.653 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.653 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.653 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.653 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.653 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWY4ZjQ1M2YzMzQ3ZDg2YjIwNTIwMDQ2ZjFiYjJlOGE4YmUzOGQwODFmZjE2OGMwMTQ5NTg4OWRjMmI0OTU5NTCe35w=: 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.911 06:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.477 nvme0n1 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: ]] 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.477 request: 00:34:41.477 { 00:34:41.477 "name": "nvme0", 00:34:41.477 "trtype": "tcp", 00:34:41.477 "traddr": "10.0.0.1", 00:34:41.477 "adrfam": "ipv4", 00:34:41.477 "trsvcid": "4420", 00:34:41.477 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:41.477 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:41.477 "prchk_reftag": false, 00:34:41.477 "prchk_guard": false, 00:34:41.477 "hdgst": false, 00:34:41.477 "ddgst": false, 00:34:41.477 "allow_unrecognized_csi": false, 00:34:41.477 "method": "bdev_nvme_attach_controller", 00:34:41.477 "req_id": 1 00:34:41.477 } 00:34:41.477 Got JSON-RPC error response 00:34:41.477 response: 00:34:41.477 { 00:34:41.477 "code": -5, 00:34:41.477 "message": "Input/output error" 00:34:41.477 } 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:41.477 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 
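The ffdhe4096/6144/8192 passes traced above all follow the same shape: host/auth.sh restricts the initiator to one digest and DH group, attaches a controller with the key under test, confirms nvme0 came up, and detaches before moving to the next keyid. A minimal sketch of one such pass, assuming SPDK's scripts/rpc.py CLI and that key1/ckey1 were registered beforehand (the target side is handled by the trace's nvmet_auth_set_key helper):

  # one connect_authenticate pass: sha512 digest, ffdhe6144 DH group, keyid 1
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0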
00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.478 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.735 request: 00:34:41.735 { 00:34:41.735 "name": "nvme0", 00:34:41.735 "trtype": "tcp", 00:34:41.735 "traddr": "10.0.0.1", 00:34:41.735 "adrfam": "ipv4", 00:34:41.735 "trsvcid": "4420", 00:34:41.735 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:41.735 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:41.735 "prchk_reftag": false, 00:34:41.735 "prchk_guard": false, 00:34:41.735 "hdgst": false, 00:34:41.735 "ddgst": false, 00:34:41.735 "dhchap_key": "key2", 00:34:41.735 "allow_unrecognized_csi": false, 00:34:41.735 "method": "bdev_nvme_attach_controller", 00:34:41.735 "req_id": 1 00:34:41.735 } 00:34:41.735 Got JSON-RPC error response 00:34:41.736 response: 00:34:41.736 { 00:34:41.736 "code": -5, 00:34:41.736 "message": "Input/output error" 00:34:41.736 } 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
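For reference, the failing attach attempts traced above map onto plain scripts/rpc.py invocations (rpc_cmd is a thin wrapper around rpc.py). A minimal sketch, assuming the target from this auth.sh run is still listening on 10.0.0.1:4420 and that the key names used here were registered earlier by the test; both calls are expected to fail with -5 (Input/output error) because the subsystem requires DH-HMAC-CHAP authentication:

    # No keys supplied at all: the host cannot complete the authentication handshake.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
    # Wrong key material (key2 only): the handshake is attempted but rejected.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2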
00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.736 request: 00:34:41.736 { 00:34:41.736 "name": "nvme0", 00:34:41.736 "trtype": "tcp", 00:34:41.736 "traddr": "10.0.0.1", 00:34:41.736 "adrfam": "ipv4", 00:34:41.736 "trsvcid": "4420", 00:34:41.736 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:41.736 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:41.736 "prchk_reftag": false, 00:34:41.736 "prchk_guard": false, 00:34:41.736 "hdgst": false, 00:34:41.736 "ddgst": false, 00:34:41.736 "dhchap_key": "key1", 00:34:41.736 "dhchap_ctrlr_key": "ckey2", 00:34:41.736 "allow_unrecognized_csi": false, 00:34:41.736 "method": "bdev_nvme_attach_controller", 00:34:41.736 "req_id": 1 00:34:41.736 } 00:34:41.736 Got JSON-RPC error response 00:34:41.736 response: 00:34:41.736 { 00:34:41.736 "code": -5, 00:34:41.736 "message": "Input/output 
error" 00:34:41.736 } 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.736 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.993 nvme0n1 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: ]] 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.993 request: 00:34:41.993 { 00:34:41.993 "name": "nvme0", 00:34:41.993 "dhchap_key": "key1", 00:34:41.993 "dhchap_ctrlr_key": "ckey2", 00:34:41.993 "method": "bdev_nvme_set_keys", 00:34:41.993 "req_id": 1 00:34:41.993 } 00:34:41.993 Got JSON-RPC error response 00:34:41.993 response: 00:34:41.993 { 00:34:41.993 "code": -13, 00:34:41.993 "message": "Permission denied" 00:34:41.993 } 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:41.993 06:03:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:43.366 06:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.366 06:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:43.366 06:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.366 06:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.366 06:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.366 06:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:43.366 06:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2UxMmJkMmJiODYwNTQzZmViN2NjYTg5YTQxMjg1MDdjMjhlOGRkOGM2NmNjZmQ2p2FYFQ==: 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: ]] 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MTgwZjBjNGQyNGMwMWU3M2U0MWQ0YjA2OGQ3NjM5MTUyNDYwNzlhZTc2NTlkNWYxWdMrdQ==: 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.300 06:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.300 nvme0n1 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWE1NzM5YThkNmQ2NDNjMWVkZDFiZDYxZjZhOWE2ZjKanyGL: 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: ]] 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQxNGY4YTYyNDk2MThmNTNjNTUzYTU4NDZlNmUyYzEPZJix: 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.300 request: 00:34:44.300 { 00:34:44.300 "name": "nvme0", 00:34:44.300 "dhchap_key": "key2", 00:34:44.300 "dhchap_ctrlr_key": "ckey1", 00:34:44.300 "method": "bdev_nvme_set_keys", 00:34:44.300 "req_id": 1 00:34:44.300 } 00:34:44.300 Got JSON-RPC error response 00:34:44.300 response: 00:34:44.300 { 00:34:44.300 "code": -13, 00:34:44.300 "message": "Permission denied" 00:34:44.300 } 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.300 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.557 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:44.557 06:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:45.488 06:03:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:45.488 rmmod nvme_tcp 00:34:45.488 rmmod nvme_fabrics 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 3551118 ']' 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 3551118 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 3551118 ']' 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 3551118 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3551118 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3551118' 00:34:45.488 killing process with pid 3551118 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 3551118 00:34:45.488 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 3551118 00:34:45.746 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:45.746 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:45.746 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:45.746 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:34:45.746 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:34:45.746 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:45.746 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:34:45.746 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:45.746 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:45.746 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.746 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:34:45.746 06:03:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:48.274 06:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:48.274 06:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:48.274 06:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:48.274 06:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:48.274 06:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:48.274 06:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:34:48.274 06:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:48.274 06:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:48.274 06:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:48.274 06:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:48.274 06:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:34:48.274 06:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:34:48.274 06:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:50.176 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:50.176 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:50.176 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:50.176 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:50.176 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:50.176 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:50.176 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:50.176 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:50.176 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:50.176 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:50.176 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:50.176 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:50.176 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:50.176 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:50.176 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:50.176 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:51.110 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:51.110 06:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Zet /tmp/spdk.key-null.90Y /tmp/spdk.key-sha256.4pJ /tmp/spdk.key-sha384.gbO /tmp/spdk.key-sha512.DsJ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:51.110 06:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:53.639 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:53.639 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:53.639 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
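The kernel-target cleanup traced above walks back the nvmet configfs hierarchy in the reverse order of its creation. A condensed sketch of the same sequence; the redirect target of the 'echo 0' is not captured by xtrace and is assumed here to be the namespace enable attribute:

    # Drop the host from the subsystem's allow list, then remove the host entry.
    rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    # Disable the namespace (assumed target of the 'echo 0' in the trace).
    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
    # Unlink the subsystem from the port, then remove namespace, port and subsystem.
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    # Finally unload the kernel target modules.
    modprobe -r nvmet_tcp nvmet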
00:34:53.639 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:53.639 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:53.639 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:53.639 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:53.639 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:53.639 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:53.639 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:53.639 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:53.639 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:53.639 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:53.639 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:53.639 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:53.639 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:53.639 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:53.639 00:34:53.639 real 0m52.228s 00:34:53.639 user 0m47.581s 00:34:53.639 sys 0m11.532s 00:34:53.639 06:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:53.639 06:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.639 ************************************ 00:34:53.639 END TEST nvmf_auth_host 00:34:53.639 ************************************ 00:34:53.639 06:03:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:34:53.639 06:03:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:53.639 06:03:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:53.639 06:03:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:53.639 06:03:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.898 ************************************ 00:34:53.898 START TEST nvmf_digest 00:34:53.898 ************************************ 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:53.898 * Looking for test storage... 
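The digest suite that starts here is driven the same way as the auth suite: run_test hands the test script the transport under test. A sketch of the equivalent standalone invocation, using the checkout path from this job:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./test/nvmf/host/digest.sh --transport=tcp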
00:34:53.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:53.898 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:53.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.899 --rc genhtml_branch_coverage=1 00:34:53.899 --rc genhtml_function_coverage=1 00:34:53.899 --rc genhtml_legend=1 00:34:53.899 --rc geninfo_all_blocks=1 00:34:53.899 --rc geninfo_unexecuted_blocks=1 00:34:53.899 00:34:53.899 ' 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:53.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.899 --rc genhtml_branch_coverage=1 00:34:53.899 --rc genhtml_function_coverage=1 00:34:53.899 --rc genhtml_legend=1 00:34:53.899 --rc geninfo_all_blocks=1 00:34:53.899 --rc geninfo_unexecuted_blocks=1 00:34:53.899 00:34:53.899 ' 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:53.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.899 --rc genhtml_branch_coverage=1 00:34:53.899 --rc genhtml_function_coverage=1 00:34:53.899 --rc genhtml_legend=1 00:34:53.899 --rc geninfo_all_blocks=1 00:34:53.899 --rc geninfo_unexecuted_blocks=1 00:34:53.899 00:34:53.899 ' 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:53.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.899 --rc genhtml_branch_coverage=1 00:34:53.899 --rc genhtml_function_coverage=1 00:34:53.899 --rc genhtml_legend=1 00:34:53.899 --rc geninfo_all_blocks=1 00:34:53.899 --rc geninfo_unexecuted_blocks=1 00:34:53.899 00:34:53.899 ' 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.899 
06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:53.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:53.899 06:03:27 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:34:53.899 06:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:59.165 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:59.165 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:34:59.165 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:59.165 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:59.166 
06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:59.166 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:59.166 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:59.166 Found net devices under 0000:af:00.0: cvl_0_0 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.166 
06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:59.166 Found net devices under 0000:af:00.1: cvl_0_1 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # is_hw=yes 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:59.166 06:03:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:59.425 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:59.425 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:59.425 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:59.425 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:34:59.425 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:59.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:59.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:34:59.683 00:34:59.683 --- 10.0.0.2 ping statistics --- 00:34:59.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.683 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:59.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:59.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:34:59.683 00:34:59.683 --- 10.0.0.1 ping statistics --- 00:34:59.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.683 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # return 0 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:59.683 ************************************ 00:34:59.683 START TEST nvmf_digest_clean 00:34:59.683 ************************************ 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=3565072 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 3565072 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3565072 ']' 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:59.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:59.683 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:59.683 [2024-12-16 06:03:33.456020] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:59.683 [2024-12-16 06:03:33.456071] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:59.683 [2024-12-16 06:03:33.515786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.942 [2024-12-16 06:03:33.557304] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:59.942 [2024-12-16 06:03:33.557341] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:59.942 [2024-12-16 06:03:33.557348] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:59.942 [2024-12-16 06:03:33.557354] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:59.942 [2024-12-16 06:03:33.557359] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
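Condensed from the nvmf_tcp_init trace above, this is the namespace-based loopback topology the digest tests run over (interface names, addresses and port are taken directly from the trace; only the ordering is compressed):

  # move the target-side port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1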
00:34:59.942 [2024-12-16 06:03:33.557399] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:59.942 null0 00:34:59.942 [2024-12-16 06:03:33.710095] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:59.942 [2024-12-16 06:03:33.734294] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3565222 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3565222 /var/tmp/bperf.sock 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3565222 ']' 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:34:59.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:59.942 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:59.942 [2024-12-16 06:03:33.787281] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:59.942 [2024-12-16 06:03:33.787321] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3565222 ] 00:35:00.200 [2024-12-16 06:03:33.842745] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.200 [2024-12-16 06:03:33.882214] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:00.200 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:00.200 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:00.200 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:00.200 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:00.200 06:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:00.458 06:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:00.458 06:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:00.716 nvme0n1 00:35:00.716 06:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:00.716 06:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:00.973 Running I/O for 2 seconds... 
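Every run_bperf iteration in this trace follows the same RPC-driven pattern; a sketch of the first (randread, 4 KiB, queue depth 128) run, with the jenkins workspace prefix shortened to repo-relative paths:

  # start bdevperf idle (-z) and paused (--wait-for-rpc), drive it over its own RPC socket
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # attach to the target with data digest enabled (--ddgst), then kick off the workload
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests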
00:35:02.839 25720.00 IOPS, 100.47 MiB/s [2024-12-16T05:03:36.695Z] 26117.50 IOPS, 102.02 MiB/s 00:35:02.839 Latency(us) 00:35:02.839 [2024-12-16T05:03:36.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:02.839 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:02.839 nvme0n1 : 2.00 26127.25 102.06 0.00 0.00 4894.03 2309.36 12795.12 00:35:02.839 [2024-12-16T05:03:36.695Z] =================================================================================================================== 00:35:02.839 [2024-12-16T05:03:36.695Z] Total : 26127.25 102.06 0.00 0.00 4894.03 2309.36 12795.12 00:35:02.839 { 00:35:02.839 "results": [ 00:35:02.839 { 00:35:02.839 "job": "nvme0n1", 00:35:02.839 "core_mask": "0x2", 00:35:02.839 "workload": "randread", 00:35:02.839 "status": "finished", 00:35:02.839 "queue_depth": 128, 00:35:02.839 "io_size": 4096, 00:35:02.839 "runtime": 2.004153, 00:35:02.839 "iops": 26127.246772077782, 00:35:02.839 "mibps": 102.05955770342884, 00:35:02.839 "io_failed": 0, 00:35:02.839 "io_timeout": 0, 00:35:02.839 "avg_latency_us": 4894.033554409101, 00:35:02.839 "min_latency_us": 2309.3638095238093, 00:35:02.839 "max_latency_us": 12795.12380952381 00:35:02.839 } 00:35:02.839 ], 00:35:02.839 "core_count": 1 00:35:02.839 } 00:35:02.839 06:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:02.839 06:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:02.839 06:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:02.839 06:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:02.839 | select(.opcode=="crc32c") 00:35:02.839 | "\(.module_name) \(.executed)"' 00:35:02.839 06:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:03.097 06:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:03.097 06:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:03.097 06:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:03.097 06:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:03.097 06:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3565222 00:35:03.097 06:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3565222 ']' 00:35:03.097 06:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3565222 00:35:03.097 06:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:03.097 06:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:03.097 06:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3565222 00:35:03.097 06:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:03.097 06:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:35:03.097 06:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3565222' 00:35:03.097 killing process with pid 3565222 00:35:03.098 06:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3565222 00:35:03.098 Received shutdown signal, test time was about 2.000000 seconds 00:35:03.098 00:35:03.098 Latency(us) 00:35:03.098 [2024-12-16T05:03:36.954Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.098 [2024-12-16T05:03:36.954Z] =================================================================================================================== 00:35:03.098 [2024-12-16T05:03:36.954Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:03.098 06:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3565222 00:35:03.364 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:03.364 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:03.364 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:03.364 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:03.364 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:03.364 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:03.364 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:03.364 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3565761 00:35:03.364 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3565761 /var/tmp/bperf.sock 00:35:03.364 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3565761 ']' 00:35:03.364 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:03.364 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:03.364 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:03.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:03.364 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:03.364 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:03.364 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:03.364 [2024-12-16 06:03:37.122018] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
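The accel_get_stats check above is how the clean-path test proves the digests were actually computed, and by the expected module; roughly, over the same bperf RPC socket:

  ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # with scan_dsa=false the expected result is "software <count>" with a non-zero count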
00:35:03.364 [2024-12-16 06:03:37.122063] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3565761 ] 00:35:03.364 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:03.364 Zero copy mechanism will not be used. 00:35:03.364 [2024-12-16 06:03:37.176788] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.624 [2024-12-16 06:03:37.217249] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:03.624 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:03.624 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:03.624 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:03.624 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:03.624 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:03.881 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:03.881 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:04.139 nvme0n1 00:35:04.139 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:04.139 06:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:04.397 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:04.397 Zero copy mechanism will not be used. 00:35:04.397 Running I/O for 2 seconds... 
00:35:06.432 5582.00 IOPS, 697.75 MiB/s [2024-12-16T05:03:40.288Z] 5691.00 IOPS, 711.38 MiB/s 00:35:06.432 Latency(us) 00:35:06.432 [2024-12-16T05:03:40.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:06.432 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:06.432 nvme0n1 : 2.00 5689.11 711.14 0.00 0.00 2809.64 643.66 7240.17 00:35:06.432 [2024-12-16T05:03:40.288Z] =================================================================================================================== 00:35:06.432 [2024-12-16T05:03:40.288Z] Total : 5689.11 711.14 0.00 0.00 2809.64 643.66 7240.17 00:35:06.432 { 00:35:06.432 "results": [ 00:35:06.432 { 00:35:06.432 "job": "nvme0n1", 00:35:06.432 "core_mask": "0x2", 00:35:06.432 "workload": "randread", 00:35:06.432 "status": "finished", 00:35:06.432 "queue_depth": 16, 00:35:06.432 "io_size": 131072, 00:35:06.432 "runtime": 2.003478, 00:35:06.432 "iops": 5689.106643546872, 00:35:06.432 "mibps": 711.138330443359, 00:35:06.433 "io_failed": 0, 00:35:06.433 "io_timeout": 0, 00:35:06.433 "avg_latency_us": 2809.640075201163, 00:35:06.433 "min_latency_us": 643.6571428571428, 00:35:06.433 "max_latency_us": 7240.167619047619 00:35:06.433 } 00:35:06.433 ], 00:35:06.433 "core_count": 1 00:35:06.433 } 00:35:06.433 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:06.433 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:06.433 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:06.433 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:06.433 | select(.opcode=="crc32c") 00:35:06.433 | "\(.module_name) \(.executed)"' 00:35:06.433 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:06.433 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:06.433 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:06.433 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:06.433 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:06.433 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3565761 00:35:06.433 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3565761 ']' 00:35:06.433 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3565761 00:35:06.433 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3565761 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3565761' 00:35:06.691 killing process with pid 3565761 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3565761 00:35:06.691 Received shutdown signal, test time was about 2.000000 seconds 00:35:06.691 00:35:06.691 Latency(us) 00:35:06.691 [2024-12-16T05:03:40.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:06.691 [2024-12-16T05:03:40.547Z] =================================================================================================================== 00:35:06.691 [2024-12-16T05:03:40.547Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3565761 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3566230 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3566230 /var/tmp/bperf.sock 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3566230 ']' 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:06.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:06.691 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:06.949 [2024-12-16 06:03:40.558635] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
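The MiB/s column in these summaries is simply IOPS times the I/O size: for the 128 KiB randread run above, 5689.11 IOPS × 131072 bytes ≈ 745,683,000 B/s, and dividing by 1,048,576 gives the reported 711.14 MiB/s.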
00:35:06.949 [2024-12-16 06:03:40.558688] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3566230 ] 00:35:06.949 [2024-12-16 06:03:40.616185] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.949 [2024-12-16 06:03:40.652991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:06.949 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:06.949 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:06.949 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:06.949 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:06.949 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:07.207 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:07.207 06:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:07.772 nvme0n1 00:35:07.773 06:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:07.773 06:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:07.773 Running I/O for 2 seconds... 
00:35:09.639 27190.00 IOPS, 106.21 MiB/s [2024-12-16T05:03:43.495Z] 27327.00 IOPS, 106.75 MiB/s 00:35:09.639 Latency(us) 00:35:09.639 [2024-12-16T05:03:43.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:09.639 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:09.639 nvme0n1 : 2.00 27329.06 106.75 0.00 0.00 4676.09 3526.46 13294.45 00:35:09.639 [2024-12-16T05:03:43.495Z] =================================================================================================================== 00:35:09.639 [2024-12-16T05:03:43.495Z] Total : 27329.06 106.75 0.00 0.00 4676.09 3526.46 13294.45 00:35:09.639 { 00:35:09.639 "results": [ 00:35:09.639 { 00:35:09.639 "job": "nvme0n1", 00:35:09.639 "core_mask": "0x2", 00:35:09.639 "workload": "randwrite", 00:35:09.639 "status": "finished", 00:35:09.639 "queue_depth": 128, 00:35:09.639 "io_size": 4096, 00:35:09.639 "runtime": 2.004533, 00:35:09.639 "iops": 27329.058688482553, 00:35:09.639 "mibps": 106.75413550188497, 00:35:09.639 "io_failed": 0, 00:35:09.639 "io_timeout": 0, 00:35:09.639 "avg_latency_us": 4676.092169864624, 00:35:09.639 "min_latency_us": 3526.460952380952, 00:35:09.639 "max_latency_us": 13294.445714285714 00:35:09.639 } 00:35:09.639 ], 00:35:09.639 "core_count": 1 00:35:09.639 } 00:35:09.639 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:09.639 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:09.639 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:09.639 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:09.639 | select(.opcode=="crc32c") 00:35:09.639 | "\(.module_name) \(.executed)"' 00:35:09.639 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:09.897 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:09.897 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:09.897 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:09.897 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:09.897 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3566230 00:35:09.897 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3566230 ']' 00:35:09.897 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3566230 00:35:09.897 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:09.897 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:09.897 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3566230 00:35:09.897 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:09.897 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- 
# '[' reactor_1 = sudo ']' 00:35:09.897 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3566230' 00:35:09.897 killing process with pid 3566230 00:35:09.897 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3566230 00:35:09.897 Received shutdown signal, test time was about 2.000000 seconds 00:35:09.897 00:35:09.897 Latency(us) 00:35:09.897 [2024-12-16T05:03:43.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:09.897 [2024-12-16T05:03:43.753Z] =================================================================================================================== 00:35:09.897 [2024-12-16T05:03:43.753Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:09.897 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3566230 00:35:10.155 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:10.155 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:10.155 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:10.155 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:10.155 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:10.155 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:10.155 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:10.155 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3566903 00:35:10.155 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3566903 /var/tmp/bperf.sock 00:35:10.155 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:10.155 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3566903 ']' 00:35:10.155 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:10.155 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:10.155 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:10.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:10.155 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:10.155 06:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:10.155 [2024-12-16 06:03:43.959185] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
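The killprocess calls repeated throughout this trace all reduce to the same guarded kill; an approximate reading of the xtrace above (pid value from that run):

  pid=3566230                                        # the backgrounded bdevperf instance
  kill -0 "$pid"                                     # still alive?
  process_name=$(ps --no-headers -o comm= "$pid")    # -> reactor_1 here
  [ "$process_name" = sudo ] || { echo "killing process with pid $pid"; kill "$pid"; }
  wait "$pid"                                        # reap it and surface its exit status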
00:35:10.155 [2024-12-16 06:03:43.959230] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3566903 ] 00:35:10.155 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:10.155 Zero copy mechanism will not be used. 00:35:10.413 [2024-12-16 06:03:44.014183] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:10.413 [2024-12-16 06:03:44.054025] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:10.413 06:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:10.413 06:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:10.413 06:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:10.413 06:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:10.414 06:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:10.671 06:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:10.671 06:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:10.929 nvme0n1 00:35:10.929 06:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:10.929 06:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:10.929 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:10.929 Zero copy mechanism will not be used. 00:35:10.929 Running I/O for 2 seconds... 
00:35:13.235 6440.00 IOPS, 805.00 MiB/s [2024-12-16T05:03:47.091Z] 6747.00 IOPS, 843.38 MiB/s 00:35:13.235 Latency(us) 00:35:13.235 [2024-12-16T05:03:47.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:13.235 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:13.235 nvme0n1 : 2.00 6744.33 843.04 0.00 0.00 2368.37 1856.85 6210.32 00:35:13.235 [2024-12-16T05:03:47.091Z] =================================================================================================================== 00:35:13.235 [2024-12-16T05:03:47.091Z] Total : 6744.33 843.04 0.00 0.00 2368.37 1856.85 6210.32 00:35:13.235 { 00:35:13.235 "results": [ 00:35:13.235 { 00:35:13.235 "job": "nvme0n1", 00:35:13.235 "core_mask": "0x2", 00:35:13.235 "workload": "randwrite", 00:35:13.235 "status": "finished", 00:35:13.235 "queue_depth": 16, 00:35:13.235 "io_size": 131072, 00:35:13.235 "runtime": 2.003163, 00:35:13.235 "iops": 6744.333836038305, 00:35:13.235 "mibps": 843.0417295047881, 00:35:13.235 "io_failed": 0, 00:35:13.235 "io_timeout": 0, 00:35:13.235 "avg_latency_us": 2368.3675189454016, 00:35:13.235 "min_latency_us": 1856.8533333333332, 00:35:13.235 "max_latency_us": 6210.31619047619 00:35:13.235 } 00:35:13.235 ], 00:35:13.235 "core_count": 1 00:35:13.235 } 00:35:13.235 06:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:13.235 06:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:13.235 06:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:13.235 06:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:13.235 | select(.opcode=="crc32c") 00:35:13.235 | "\(.module_name) \(.executed)"' 00:35:13.235 06:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:13.235 06:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:13.235 06:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:13.235 06:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:13.235 06:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:13.235 06:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3566903 00:35:13.235 06:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3566903 ']' 00:35:13.235 06:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3566903 00:35:13.235 06:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:13.235 06:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:13.235 06:03:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3566903 00:35:13.235 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:13.235 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:35:13.235 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3566903' 00:35:13.235 killing process with pid 3566903 00:35:13.235 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3566903 00:35:13.235 Received shutdown signal, test time was about 2.000000 seconds 00:35:13.235 00:35:13.235 Latency(us) 00:35:13.235 [2024-12-16T05:03:47.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:13.235 [2024-12-16T05:03:47.091Z] =================================================================================================================== 00:35:13.235 [2024-12-16T05:03:47.091Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:13.235 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3566903 00:35:13.494 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3565072 00:35:13.494 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3565072 ']' 00:35:13.494 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3565072 00:35:13.494 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:13.494 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:13.494 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3565072 00:35:13.494 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:13.494 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:13.494 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3565072' 00:35:13.494 killing process with pid 3565072 00:35:13.494 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3565072 00:35:13.494 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3565072 00:35:13.752 00:35:13.752 real 0m14.041s 00:35:13.752 user 0m26.861s 00:35:13.752 sys 0m4.481s 00:35:13.752 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:13.752 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:13.752 ************************************ 00:35:13.752 END TEST nvmf_digest_clean 00:35:13.752 ************************************ 00:35:13.752 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:13.752 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:13.752 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:13.752 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:13.752 ************************************ 00:35:13.752 START TEST nvmf_digest_error 00:35:13.752 ************************************ 00:35:13.752 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:35:13.752 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:13.752 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:13.752 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:13.752 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:13.752 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=3567387 00:35:13.752 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:13.752 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 3567387 00:35:13.752 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3567387 ']' 00:35:13.752 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:13.752 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:13.752 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:13.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:13.752 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:13.752 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:13.752 [2024-12-16 06:03:47.556937] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:13.752 [2024-12-16 06:03:47.556982] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:13.752 [2024-12-16 06:03:47.606190] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.011 [2024-12-16 06:03:47.643448] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:14.011 [2024-12-16 06:03:47.643486] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:14.011 [2024-12-16 06:03:47.643493] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:14.011 [2024-12-16 06:03:47.643502] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:14.011 [2024-12-16 06:03:47.643507] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:14.011 [2024-12-16 06:03:47.643528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:14.011 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:14.011 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:14.011 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:14.011 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:14.011 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:14.011 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:14.011 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:14.011 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.011 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:14.011 [2024-12-16 06:03:47.748040] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:14.011 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.011 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:14.011 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:14.011 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.011 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:14.011 null0 00:35:14.011 [2024-12-16 06:03:47.837195] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:14.011 [2024-12-16 06:03:47.861397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:14.011 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.011 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:14.269 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:14.269 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:14.269 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:14.269 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:14.269 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3567507 00:35:14.269 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3567507 /var/tmp/bperf.sock 00:35:14.269 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:14.269 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3567507 ']' 
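For the error-path test it is the target itself that is started with --wait-for-rpc, so that crc32c can be routed to the error-injection accel module before initialization completes; condensed from the trace, with the target RPC socket assumed to be the default /var/tmp/spdk.sock and repo-relative paths as above:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  ./scripts/rpc.py accel_assign_opc -o crc32c -m error   # "Operation crc32c will be assigned to module error"
  # the usual target config then follows: null0 bdev, TCP transport, listener on 10.0.0.2:4420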
00:35:14.269 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:14.269 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:14.269 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:14.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:14.269 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:14.269 06:03:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:14.269 [2024-12-16 06:03:47.913275] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:14.269 [2024-12-16 06:03:47.913314] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3567507 ] 00:35:14.269 [2024-12-16 06:03:47.968091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.269 [2024-12-16 06:03:48.007727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:14.269 06:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:14.269 06:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:14.269 06:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:14.269 06:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:14.527 06:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:14.527 06:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.527 06:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:14.527 06:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.527 06:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:14.527 06:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:15.091 nvme0n1 00:35:15.091 06:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:15.091 06:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.091 06:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
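The sequence just traced prepares the host to tolerate and count the injected failures, then turns the corruption on: error stats and unlimited retries are enabled on the bperf side, the controller is attached with data digest enabled, and crc32c error injection is switched from disable to corrupt through the target-side rpc_cmd. That is what produces the data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions that follow. Roughly, with the same socket assumptions as above:

  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable          # start from a clean state
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # inject crc32c corruption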
00:35:15.091 06:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.091 06:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:15.091 06:03:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:15.091 Running I/O for 2 seconds... 00:35:15.091 [2024-12-16 06:03:48.803342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.092 [2024-12-16 06:03:48.803372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.092 [2024-12-16 06:03:48.803384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.092 [2024-12-16 06:03:48.814788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.092 [2024-12-16 06:03:48.814812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.092 [2024-12-16 06:03:48.814821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.092 [2024-12-16 06:03:48.824382] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.092 [2024-12-16 06:03:48.824406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.092 [2024-12-16 06:03:48.824414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.092 [2024-12-16 06:03:48.833272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.092 [2024-12-16 06:03:48.833292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.092 [2024-12-16 06:03:48.833300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.092 [2024-12-16 06:03:48.844403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.092 [2024-12-16 06:03:48.844423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.092 [2024-12-16 06:03:48.844432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.092 [2024-12-16 06:03:48.854246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.092 [2024-12-16 06:03:48.854265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.092 [2024-12-16 06:03:48.854273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.092 [2024-12-16 06:03:48.863412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.092 [2024-12-16 06:03:48.863431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.092 [2024-12-16 06:03:48.863439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.092 [2024-12-16 06:03:48.872607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.092 [2024-12-16 06:03:48.872627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.092 [2024-12-16 06:03:48.872635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.092 [2024-12-16 06:03:48.881968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.092 [2024-12-16 06:03:48.881989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.092 [2024-12-16 06:03:48.881997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.092 [2024-12-16 06:03:48.891043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.092 [2024-12-16 06:03:48.891064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.092 [2024-12-16 06:03:48.891072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.092 [2024-12-16 06:03:48.900134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.092 [2024-12-16 06:03:48.900154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.092 [2024-12-16 06:03:48.900163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.092 [2024-12-16 06:03:48.909217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.092 [2024-12-16 06:03:48.909237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.092 [2024-12-16 06:03:48.909245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.092 [2024-12-16 06:03:48.919013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.092 [2024-12-16 06:03:48.919033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.092 [2024-12-16 06:03:48.919041] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.092 [2024-12-16 06:03:48.926898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.092 [2024-12-16 06:03:48.926918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.092 [2024-12-16 06:03:48.926927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.092 [2024-12-16 06:03:48.936953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.092 [2024-12-16 06:03:48.936972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.092 [2024-12-16 06:03:48.936980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.092 [2024-12-16 06:03:48.945853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.092 [2024-12-16 06:03:48.945872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.092 [2024-12-16 06:03:48.945881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.350 [2024-12-16 06:03:48.955657] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.350 [2024-12-16 06:03:48.955677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.350 [2024-12-16 06:03:48.955685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.350 [2024-12-16 06:03:48.965329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.350 [2024-12-16 06:03:48.965349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.350 [2024-12-16 06:03:48.965356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.350 [2024-12-16 06:03:48.976247] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.350 [2024-12-16 06:03:48.976266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 06:03:48.976274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:48.985167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.351 [2024-12-16 06:03:48.985186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 
06:03:48.985198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:48.996017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.351 [2024-12-16 06:03:48.996036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 06:03:48.996044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:49.005715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.351 [2024-12-16 06:03:49.005735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 06:03:49.005743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:49.015805] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.351 [2024-12-16 06:03:49.015824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 06:03:49.015832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:49.024987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.351 [2024-12-16 06:03:49.025006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 06:03:49.025014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:49.034091] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.351 [2024-12-16 06:03:49.034110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 06:03:49.034117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:49.042631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.351 [2024-12-16 06:03:49.042650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 06:03:49.042659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:49.051624] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.351 [2024-12-16 06:03:49.051644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11868 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 06:03:49.051652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:49.060838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.351 [2024-12-16 06:03:49.060863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 06:03:49.060871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:49.070027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.351 [2024-12-16 06:03:49.070046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 06:03:49.070054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:49.079725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.351 [2024-12-16 06:03:49.079744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 06:03:49.079753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:49.089442] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.351 [2024-12-16 06:03:49.089461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 06:03:49.089469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:49.098262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.351 [2024-12-16 06:03:49.098282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 06:03:49.098290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:49.107001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.351 [2024-12-16 06:03:49.107022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 06:03:49.107030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:49.118539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.351 [2024-12-16 06:03:49.118561] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 06:03:49.118569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:49.128447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.351 [2024-12-16 06:03:49.128468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 06:03:49.128476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:49.136434] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.351 [2024-12-16 06:03:49.136454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 06:03:49.136462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:49.147392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.351 [2024-12-16 06:03:49.147412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 06:03:49.147424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:49.156581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.351 [2024-12-16 06:03:49.156601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 06:03:49.156609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:49.165717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.351 [2024-12-16 06:03:49.165737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 06:03:49.165745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:49.174323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.351 [2024-12-16 06:03:49.174342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.351 [2024-12-16 06:03:49.174350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.351 [2024-12-16 06:03:49.185627] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.352 [2024-12-16 
06:03:49.185647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.352 [2024-12-16 06:03:49.185655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.352 [2024-12-16 06:03:49.193845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.352 [2024-12-16 06:03:49.193871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.352 [2024-12-16 06:03:49.193879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.610 [2024-12-16 06:03:49.205567] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.610 [2024-12-16 06:03:49.205587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.610 [2024-12-16 06:03:49.205595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.610 [2024-12-16 06:03:49.216920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.610 [2024-12-16 06:03:49.216940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.610 [2024-12-16 06:03:49.216947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.610 [2024-12-16 06:03:49.225978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.610 [2024-12-16 06:03:49.225997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.610 [2024-12-16 06:03:49.226005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.610 [2024-12-16 06:03:49.238077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.610 [2024-12-16 06:03:49.238102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.610 [2024-12-16 06:03:49.238110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.610 [2024-12-16 06:03:49.247680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.610 [2024-12-16 06:03:49.247699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.610 [2024-12-16 06:03:49.247707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.610 [2024-12-16 06:03:49.256051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1ec0810) 00:35:15.610 [2024-12-16 06:03:49.256071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.610 [2024-12-16 06:03:49.256079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.610 [2024-12-16 06:03:49.265731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.610 [2024-12-16 06:03:49.265756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.610 [2024-12-16 06:03:49.265764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.610 [2024-12-16 06:03:49.275172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.610 [2024-12-16 06:03:49.275192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.610 [2024-12-16 06:03:49.275200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.610 [2024-12-16 06:03:49.284389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.610 [2024-12-16 06:03:49.284409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.610 [2024-12-16 06:03:49.284416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.610 [2024-12-16 06:03:49.293785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.610 [2024-12-16 06:03:49.293805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.610 [2024-12-16 06:03:49.293813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.610 [2024-12-16 06:03:49.303739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.610 [2024-12-16 06:03:49.303759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.610 [2024-12-16 06:03:49.303767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.610 [2024-12-16 06:03:49.313491] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.610 [2024-12-16 06:03:49.313510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.610 [2024-12-16 06:03:49.313518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.610 [2024-12-16 06:03:49.322086] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.610 [2024-12-16 06:03:49.322106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.610 [2024-12-16 06:03:49.322113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.610 [2024-12-16 06:03:49.332961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.610 [2024-12-16 06:03:49.332981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.610 [2024-12-16 06:03:49.332989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.610 [2024-12-16 06:03:49.341405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.610 [2024-12-16 06:03:49.341426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.610 [2024-12-16 06:03:49.341434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.610 [2024-12-16 06:03:49.354148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.610 [2024-12-16 06:03:49.354168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.610 [2024-12-16 06:03:49.354176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.610 [2024-12-16 06:03:49.365056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.610 [2024-12-16 06:03:49.365075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.610 [2024-12-16 06:03:49.365083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.610 [2024-12-16 06:03:49.373514] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.610 [2024-12-16 06:03:49.373534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.610 [2024-12-16 06:03:49.373541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.610 [2024-12-16 06:03:49.384430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.610 [2024-12-16 06:03:49.384450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.610 [2024-12-16 06:03:49.384458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:35:15.610 [2024-12-16 06:03:49.392383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.611 [2024-12-16 06:03:49.392402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.611 [2024-12-16 06:03:49.392410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.611 [2024-12-16 06:03:49.403593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.611 [2024-12-16 06:03:49.403613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.611 [2024-12-16 06:03:49.403624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.611 [2024-12-16 06:03:49.416200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.611 [2024-12-16 06:03:49.416220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.611 [2024-12-16 06:03:49.416228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.611 [2024-12-16 06:03:49.426895] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.611 [2024-12-16 06:03:49.426916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.611 [2024-12-16 06:03:49.426924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.611 [2024-12-16 06:03:49.436013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.611 [2024-12-16 06:03:49.436034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.611 [2024-12-16 06:03:49.436041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.611 [2024-12-16 06:03:49.446447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.611 [2024-12-16 06:03:49.446468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.611 [2024-12-16 06:03:49.446476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.611 [2024-12-16 06:03:49.457417] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.611 [2024-12-16 06:03:49.457436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.611 [2024-12-16 06:03:49.457444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.869 [2024-12-16 06:03:49.465796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.869 [2024-12-16 06:03:49.465816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-12-16 06:03:49.465823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.869 [2024-12-16 06:03:49.477071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.869 [2024-12-16 06:03:49.477091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-12-16 06:03:49.477099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.869 [2024-12-16 06:03:49.488620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.869 [2024-12-16 06:03:49.488640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-12-16 06:03:49.488648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.869 [2024-12-16 06:03:49.497596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.869 [2024-12-16 06:03:49.497615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-12-16 06:03:49.497624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.869 [2024-12-16 06:03:49.509323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.869 [2024-12-16 06:03:49.509344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-12-16 06:03:49.509352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.869 [2024-12-16 06:03:49.520295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.869 [2024-12-16 06:03:49.520314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-12-16 06:03:49.520322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.869 [2024-12-16 06:03:49.528573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.869 [2024-12-16 06:03:49.528592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-12-16 06:03:49.528600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.869 [2024-12-16 06:03:49.538520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.869 [2024-12-16 06:03:49.538538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-12-16 06:03:49.538546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.869 [2024-12-16 06:03:49.549541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.869 [2024-12-16 06:03:49.549559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-12-16 06:03:49.549567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.869 [2024-12-16 06:03:49.558262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.869 [2024-12-16 06:03:49.558281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-12-16 06:03:49.558289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.869 [2024-12-16 06:03:49.569343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.869 [2024-12-16 06:03:49.569363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-12-16 06:03:49.569370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.869 [2024-12-16 06:03:49.580646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.869 [2024-12-16 06:03:49.580665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-12-16 06:03:49.580677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.869 [2024-12-16 06:03:49.591245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.869 [2024-12-16 06:03:49.591264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-12-16 06:03:49.591271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.869 [2024-12-16 06:03:49.600479] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.869 [2024-12-16 06:03:49.600498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
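[editor's note] Every record in this stretch is the intended effect of the accel_error_inject_error -o crc32c -t corrupt -i 256 call issued just before perform_tests: the receive-path crc32c check in nvme_tcp_accel_seq_recv_compute_crc32_done fails, and each READ completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). If you need to tally these failures from a saved copy of this console output, a minimal sketch follows; the build.log filename is hypothetical.

# hypothetical post-processing of a saved copy of this console log
grep -o 'data digest error on tqpair' build.log | wc -l                          # total injected digest errors
grep -oE 'TRANSIENT TRANSPORT ERROR \(00/22\) qid:[0-9]+ cid:[0-9]+' build.log \
  | sort | uniq -c | sort -rn | head                                             # failures per queue/command id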
00:35:15.869 [2024-12-16 06:03:49.600506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.869 [2024-12-16 06:03:49.608919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.869 [2024-12-16 06:03:49.608938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-12-16 06:03:49.608946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.869 [2024-12-16 06:03:49.618298] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.869 [2024-12-16 06:03:49.618317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-12-16 06:03:49.618325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.869 [2024-12-16 06:03:49.628417] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.869 [2024-12-16 06:03:49.628436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.869 [2024-12-16 06:03:49.628444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.870 [2024-12-16 06:03:49.637497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.870 [2024-12-16 06:03:49.637516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.870 [2024-12-16 06:03:49.637524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.870 [2024-12-16 06:03:49.646237] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.870 [2024-12-16 06:03:49.646256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.870 [2024-12-16 06:03:49.646264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.870 [2024-12-16 06:03:49.656357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.870 [2024-12-16 06:03:49.656376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.870 [2024-12-16 06:03:49.656383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.870 [2024-12-16 06:03:49.666400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.870 [2024-12-16 06:03:49.666422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 
lba:7864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.870 [2024-12-16 06:03:49.666430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.870 [2024-12-16 06:03:49.674983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.870 [2024-12-16 06:03:49.675002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.870 [2024-12-16 06:03:49.675010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.870 [2024-12-16 06:03:49.687511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.870 [2024-12-16 06:03:49.687530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.870 [2024-12-16 06:03:49.687538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.870 [2024-12-16 06:03:49.698920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.870 [2024-12-16 06:03:49.698939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.870 [2024-12-16 06:03:49.698947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.870 [2024-12-16 06:03:49.707673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.870 [2024-12-16 06:03:49.707691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.870 [2024-12-16 06:03:49.707699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.870 [2024-12-16 06:03:49.718672] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:15.870 [2024-12-16 06:03:49.718692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.870 [2024-12-16 06:03:49.718700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.128 [2024-12-16 06:03:49.729948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.128 [2024-12-16 06:03:49.729967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.128 [2024-12-16 06:03:49.729975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.128 [2024-12-16 06:03:49.738069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.128 [2024-12-16 06:03:49.738088] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.128 [2024-12-16 06:03:49.738096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.128 [2024-12-16 06:03:49.746925] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.128 [2024-12-16 06:03:49.746945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.128 [2024-12-16 06:03:49.746953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.128 [2024-12-16 06:03:49.757864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.128 [2024-12-16 06:03:49.757883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.128 [2024-12-16 06:03:49.757891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.128 [2024-12-16 06:03:49.766994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.128 [2024-12-16 06:03:49.767013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.128 [2024-12-16 06:03:49.767020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.128 [2024-12-16 06:03:49.775209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.128 [2024-12-16 06:03:49.775228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.128 [2024-12-16 06:03:49.775236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.129 [2024-12-16 06:03:49.784433] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.129 [2024-12-16 06:03:49.784451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.129 [2024-12-16 06:03:49.784459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.129 25950.00 IOPS, 101.37 MiB/s [2024-12-16T05:03:49.985Z] [2024-12-16 06:03:49.794996] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.129 [2024-12-16 06:03:49.795015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.129 [2024-12-16 06:03:49.795023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.129 [2024-12-16 06:03:49.804938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.129 [2024-12-16 06:03:49.804957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.129 [2024-12-16 06:03:49.804965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.129 [2024-12-16 06:03:49.813775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.129 [2024-12-16 06:03:49.813795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.129 [2024-12-16 06:03:49.813803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.129 [2024-12-16 06:03:49.823686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.129 [2024-12-16 06:03:49.823705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.129 [2024-12-16 06:03:49.823713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.129 [2024-12-16 06:03:49.832397] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.129 [2024-12-16 06:03:49.832415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.129 [2024-12-16 06:03:49.832427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.129 [2024-12-16 06:03:49.841295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.129 [2024-12-16 06:03:49.841315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.129 [2024-12-16 06:03:49.841323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.129 [2024-12-16 06:03:49.851658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.129 [2024-12-16 06:03:49.851677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.129 [2024-12-16 06:03:49.851685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.129 [2024-12-16 06:03:49.859682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.129 [2024-12-16 06:03:49.859701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.129 [2024-12-16 06:03:49.859708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.129 [2024-12-16 06:03:49.869869] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.129 [2024-12-16 06:03:49.869889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.129 [2024-12-16 06:03:49.869897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.129 [2024-12-16 06:03:49.880212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.129 [2024-12-16 06:03:49.880231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.129 [2024-12-16 06:03:49.880239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.129 [2024-12-16 06:03:49.888324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.129 [2024-12-16 06:03:49.888343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.129 [2024-12-16 06:03:49.888350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.129 [2024-12-16 06:03:49.900093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.129 [2024-12-16 06:03:49.900113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.129 [2024-12-16 06:03:49.900121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.129 [2024-12-16 06:03:49.910305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.129 [2024-12-16 06:03:49.910325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.129 [2024-12-16 06:03:49.910333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.129 [2024-12-16 06:03:49.919034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.129 [2024-12-16 06:03:49.919054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.129 [2024-12-16 06:03:49.919062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.129 [2024-12-16 06:03:49.928373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.129 [2024-12-16 06:03:49.928394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.129 [2024-12-16 06:03:49.928402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:35:16.129 [2024-12-16 06:03:49.937103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.129 [2024-12-16 06:03:49.937122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.129 [2024-12-16 06:03:49.937130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.129 [2024-12-16 06:03:49.947193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.129 [2024-12-16 06:03:49.947212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.129 [2024-12-16 06:03:49.947220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.129 [2024-12-16 06:03:49.955905] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.129 [2024-12-16 06:03:49.955924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.129 [2024-12-16 06:03:49.955932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.129 [2024-12-16 06:03:49.964121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.129 [2024-12-16 06:03:49.964139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.129 [2024-12-16 06:03:49.964147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.129 [2024-12-16 06:03:49.975293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.129 [2024-12-16 06:03:49.975313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.129 [2024-12-16 06:03:49.975320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.388 [2024-12-16 06:03:49.988103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.388 [2024-12-16 06:03:49.988123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.388 [2024-12-16 06:03:49.988131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.388 [2024-12-16 06:03:49.998677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.388 [2024-12-16 06:03:49.998696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.388 [2024-12-16 06:03:49.998707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.388 [2024-12-16 06:03:50.007790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.388 [2024-12-16 06:03:50.007810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.388 [2024-12-16 06:03:50.007818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.388 [2024-12-16 06:03:50.020267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.388 [2024-12-16 06:03:50.020287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.388 [2024-12-16 06:03:50.020296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.388 [2024-12-16 06:03:50.028905] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.388 [2024-12-16 06:03:50.028925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.388 [2024-12-16 06:03:50.028933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.388 [2024-12-16 06:03:50.040361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.388 [2024-12-16 06:03:50.040381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.388 [2024-12-16 06:03:50.040389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.388 [2024-12-16 06:03:50.050387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.388 [2024-12-16 06:03:50.050407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.388 [2024-12-16 06:03:50.050415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.388 [2024-12-16 06:03:50.059778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.388 [2024-12-16 06:03:50.059797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.388 [2024-12-16 06:03:50.059805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.388 [2024-12-16 06:03:50.068484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.388 [2024-12-16 06:03:50.068503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.388 [2024-12-16 06:03:50.068511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.388 [2024-12-16 06:03:50.077162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.388 [2024-12-16 06:03:50.077180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.388 [2024-12-16 06:03:50.077188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.388 [2024-12-16 06:03:50.086881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.388 [2024-12-16 06:03:50.086905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.388 [2024-12-16 06:03:50.086913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.388 [2024-12-16 06:03:50.097473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.388 [2024-12-16 06:03:50.097492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.388 [2024-12-16 06:03:50.097500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.388 [2024-12-16 06:03:50.107327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.388 [2024-12-16 06:03:50.107347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.388 [2024-12-16 06:03:50.107356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.388 [2024-12-16 06:03:50.117384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.388 [2024-12-16 06:03:50.117404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.388 [2024-12-16 06:03:50.117413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.388 [2024-12-16 06:03:50.125615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.388 [2024-12-16 06:03:50.125635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.388 [2024-12-16 06:03:50.125643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.388 [2024-12-16 06:03:50.134652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.388 [2024-12-16 06:03:50.134671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:16.388 [2024-12-16 06:03:50.134679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.388 [2024-12-16 06:03:50.145824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.388 [2024-12-16 06:03:50.145844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.388 [2024-12-16 06:03:50.145858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.388 [2024-12-16 06:03:50.156686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.388 [2024-12-16 06:03:50.156706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.388 [2024-12-16 06:03:50.156713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.388 [2024-12-16 06:03:50.166231] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.389 [2024-12-16 06:03:50.166251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.389 [2024-12-16 06:03:50.166259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.389 [2024-12-16 06:03:50.177485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.389 [2024-12-16 06:03:50.177504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.389 [2024-12-16 06:03:50.177512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.389 [2024-12-16 06:03:50.187760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.389 [2024-12-16 06:03:50.187780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.389 [2024-12-16 06:03:50.187788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.389 [2024-12-16 06:03:50.195636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.389 [2024-12-16 06:03:50.195655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.389 [2024-12-16 06:03:50.195664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.389 [2024-12-16 06:03:50.207319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.389 [2024-12-16 06:03:50.207339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:19395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.389 [2024-12-16 06:03:50.207347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.389 [2024-12-16 06:03:50.218825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.389 [2024-12-16 06:03:50.218845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.389 [2024-12-16 06:03:50.218860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.389 [2024-12-16 06:03:50.229768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.389 [2024-12-16 06:03:50.229788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.389 [2024-12-16 06:03:50.229796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.389 [2024-12-16 06:03:50.238348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.389 [2024-12-16 06:03:50.238369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.389 [2024-12-16 06:03:50.238380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.647 [2024-12-16 06:03:50.250061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.647 [2024-12-16 06:03:50.250081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-12-16 06:03:50.250089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.647 [2024-12-16 06:03:50.259998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.647 [2024-12-16 06:03:50.260019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-12-16 06:03:50.260031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.647 [2024-12-16 06:03:50.269514] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.647 [2024-12-16 06:03:50.269533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-12-16 06:03:50.269541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.647 [2024-12-16 06:03:50.278075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.647 [2024-12-16 06:03:50.278094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-12-16 06:03:50.278101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.647 [2024-12-16 06:03:50.289890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.647 [2024-12-16 06:03:50.289911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-12-16 06:03:50.289919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.647 [2024-12-16 06:03:50.302444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.647 [2024-12-16 06:03:50.302465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-12-16 06:03:50.302473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.647 [2024-12-16 06:03:50.314717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.647 [2024-12-16 06:03:50.314737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-12-16 06:03:50.314745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.647 [2024-12-16 06:03:50.322806] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.647 [2024-12-16 06:03:50.322826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-12-16 06:03:50.322834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.647 [2024-12-16 06:03:50.334431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.647 [2024-12-16 06:03:50.334451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-12-16 06:03:50.334459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.647 [2024-12-16 06:03:50.346498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.647 [2024-12-16 06:03:50.346518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-12-16 06:03:50.346526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.647 [2024-12-16 06:03:50.354070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 
00:35:16.647 [2024-12-16 06:03:50.354093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-12-16 06:03:50.354101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.647 [2024-12-16 06:03:50.366020] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.647 [2024-12-16 06:03:50.366040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-12-16 06:03:50.366048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.647 [2024-12-16 06:03:50.375203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.647 [2024-12-16 06:03:50.375224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-12-16 06:03:50.375232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.647 [2024-12-16 06:03:50.385444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.647 [2024-12-16 06:03:50.385464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-12-16 06:03:50.385472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.647 [2024-12-16 06:03:50.393892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.647 [2024-12-16 06:03:50.393911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-12-16 06:03:50.393919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.647 [2024-12-16 06:03:50.404938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.648 [2024-12-16 06:03:50.404958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.648 [2024-12-16 06:03:50.404965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.648 [2024-12-16 06:03:50.414426] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.648 [2024-12-16 06:03:50.414445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.648 [2024-12-16 06:03:50.414453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.648 [2024-12-16 06:03:50.424129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.648 [2024-12-16 06:03:50.424148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.648 [2024-12-16 06:03:50.424156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.648 [2024-12-16 06:03:50.434127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.648 [2024-12-16 06:03:50.434146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.648 [2024-12-16 06:03:50.434157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.648 [2024-12-16 06:03:50.443828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.648 [2024-12-16 06:03:50.443851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.648 [2024-12-16 06:03:50.443860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.648 [2024-12-16 06:03:50.452985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.648 [2024-12-16 06:03:50.453005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.648 [2024-12-16 06:03:50.453013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.648 [2024-12-16 06:03:50.462618] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.648 [2024-12-16 06:03:50.462638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.648 [2024-12-16 06:03:50.462646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.648 [2024-12-16 06:03:50.472056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.648 [2024-12-16 06:03:50.472075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.648 [2024-12-16 06:03:50.472083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.648 [2024-12-16 06:03:50.481634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.648 [2024-12-16 06:03:50.481653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.648 [2024-12-16 06:03:50.481661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.648 [2024-12-16 06:03:50.490310] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.648 [2024-12-16 06:03:50.490329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.648 [2024-12-16 06:03:50.490337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.648 [2024-12-16 06:03:50.499934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.648 [2024-12-16 06:03:50.499953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.648 [2024-12-16 06:03:50.499962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.906 [2024-12-16 06:03:50.509647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.906 [2024-12-16 06:03:50.509666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.906 [2024-12-16 06:03:50.509674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.906 [2024-12-16 06:03:50.519045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.906 [2024-12-16 06:03:50.519072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.906 [2024-12-16 06:03:50.519080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.906 [2024-12-16 06:03:50.528349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.906 [2024-12-16 06:03:50.528371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.906 [2024-12-16 06:03:50.528379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.906 [2024-12-16 06:03:50.537095] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.906 [2024-12-16 06:03:50.537116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.906 [2024-12-16 06:03:50.537123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.906 [2024-12-16 06:03:50.549252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.906 [2024-12-16 06:03:50.549273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.906 [2024-12-16 06:03:50.549281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:35:16.906 [2024-12-16 06:03:50.562172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.906 [2024-12-16 06:03:50.562195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.906 [2024-12-16 06:03:50.562203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.906 [2024-12-16 06:03:50.574815] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.906 [2024-12-16 06:03:50.574836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.906 [2024-12-16 06:03:50.574844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.906 [2024-12-16 06:03:50.585929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.906 [2024-12-16 06:03:50.585949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.906 [2024-12-16 06:03:50.585957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.906 [2024-12-16 06:03:50.594832] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.906 [2024-12-16 06:03:50.594857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.906 [2024-12-16 06:03:50.594865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.906 [2024-12-16 06:03:50.606057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.906 [2024-12-16 06:03:50.606077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.906 [2024-12-16 06:03:50.606085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.906 [2024-12-16 06:03:50.616732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.906 [2024-12-16 06:03:50.616752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.906 [2024-12-16 06:03:50.616760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.906 [2024-12-16 06:03:50.628408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.907 [2024-12-16 06:03:50.628428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-12-16 06:03:50.628437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.907 [2024-12-16 06:03:50.639871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.907 [2024-12-16 06:03:50.639891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-12-16 06:03:50.639899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.907 [2024-12-16 06:03:50.648254] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.907 [2024-12-16 06:03:50.648275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-12-16 06:03:50.648283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.907 [2024-12-16 06:03:50.660384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.907 [2024-12-16 06:03:50.660404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-12-16 06:03:50.660412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.907 [2024-12-16 06:03:50.668399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.907 [2024-12-16 06:03:50.668419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-12-16 06:03:50.668428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.907 [2024-12-16 06:03:50.680786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.907 [2024-12-16 06:03:50.680806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-12-16 06:03:50.680814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.907 [2024-12-16 06:03:50.691723] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.907 [2024-12-16 06:03:50.691744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-12-16 06:03:50.691751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.907 [2024-12-16 06:03:50.700487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.907 [2024-12-16 06:03:50.700508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-12-16 06:03:50.700520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.907 [2024-12-16 06:03:50.712183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.907 [2024-12-16 06:03:50.712203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-12-16 06:03:50.712211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.907 [2024-12-16 06:03:50.723188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.907 [2024-12-16 06:03:50.723208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-12-16 06:03:50.723216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.907 [2024-12-16 06:03:50.734451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.907 [2024-12-16 06:03:50.734471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-12-16 06:03:50.734478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.907 [2024-12-16 06:03:50.743696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.907 [2024-12-16 06:03:50.743716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-12-16 06:03:50.743724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.907 [2024-12-16 06:03:50.754868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:16.907 [2024-12-16 06:03:50.754889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-12-16 06:03:50.754897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.165 [2024-12-16 06:03:50.765449] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:17.165 [2024-12-16 06:03:50.765469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.165 [2024-12-16 06:03:50.765477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.165 [2024-12-16 06:03:50.773908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:17.165 [2024-12-16 06:03:50.773932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:17.165 [2024-12-16 06:03:50.773940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.165 [2024-12-16 06:03:50.784974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:17.165 [2024-12-16 06:03:50.784994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.165 [2024-12-16 06:03:50.785002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.165 [2024-12-16 06:03:50.793421] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ec0810) 00:35:17.165 [2024-12-16 06:03:50.793445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.165 [2024-12-16 06:03:50.793453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.165 25631.50 IOPS, 100.12 MiB/s 00:35:17.165 Latency(us) 00:35:17.165 [2024-12-16T05:03:51.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:17.165 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:17.165 nvme0n1 : 2.04 25139.11 98.20 0.00 0.00 4987.24 2605.84 48683.89 00:35:17.165 [2024-12-16T05:03:51.021Z] =================================================================================================================== 00:35:17.165 [2024-12-16T05:03:51.021Z] Total : 25139.11 98.20 0.00 0.00 4987.24 2605.84 48683.89 00:35:17.165 { 00:35:17.165 "results": [ 00:35:17.165 { 00:35:17.165 "job": "nvme0n1", 00:35:17.165 "core_mask": "0x2", 00:35:17.165 "workload": "randread", 00:35:17.165 "status": "finished", 00:35:17.165 "queue_depth": 128, 00:35:17.165 "io_size": 4096, 00:35:17.165 "runtime": 2.044265, 00:35:17.165 "iops": 25139.10867720183, 00:35:17.165 "mibps": 98.19964327031965, 00:35:17.165 "io_failed": 0, 00:35:17.165 "io_timeout": 0, 00:35:17.165 "avg_latency_us": 4987.239221838918, 00:35:17.165 "min_latency_us": 2605.8361904761905, 00:35:17.165 "max_latency_us": 48683.885714285716 00:35:17.165 } 00:35:17.165 ], 00:35:17.165 "core_count": 1 00:35:17.165 } 00:35:17.165 06:03:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:17.165 06:03:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:17.165 06:03:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:17.165 | .driver_specific 00:35:17.165 | .nvme_error 00:35:17.165 | .status_code 00:35:17.165 | .command_transient_transport_error' 00:35:17.165 06:03:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:17.423 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 201 > 0 )) 00:35:17.423 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3567507 00:35:17.423 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3567507 ']' 
00:35:17.423 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3567507 00:35:17.423 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:17.423 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:17.423 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3567507 00:35:17.424 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:17.424 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:17.424 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3567507' 00:35:17.424 killing process with pid 3567507 00:35:17.424 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3567507 00:35:17.424 Received shutdown signal, test time was about 2.000000 seconds 00:35:17.424 00:35:17.424 Latency(us) 00:35:17.424 [2024-12-16T05:03:51.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:17.424 [2024-12-16T05:03:51.280Z] =================================================================================================================== 00:35:17.424 [2024-12-16T05:03:51.280Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:17.424 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3567507 00:35:17.424 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:17.424 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:17.424 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:17.424 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:17.424 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:17.424 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3568085 00:35:17.424 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3568085 /var/tmp/bperf.sock 00:35:17.424 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:17.424 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3568085 ']' 00:35:17.682 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:17.682 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:17.682 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:17.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:35:17.682 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:17.682 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:17.682 [2024-12-16 06:03:51.322479] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:17.682 [2024-12-16 06:03:51.322525] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3568085 ] 00:35:17.682 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:17.682 Zero copy mechanism will not be used. 00:35:17.682 [2024-12-16 06:03:51.377281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.682 [2024-12-16 06:03:51.414939] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:17.682 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:17.682 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:17.682 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:17.682 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:17.939 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:17.939 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.939 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:17.940 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.940 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:17.940 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:18.197 nvme0n1 00:35:18.197 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:18.197 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.197 06:03:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:18.197 06:03:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.197 06:03:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:18.197 06:03:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
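The xtrace above captures the whole digest-error flow for this run: bdevperf is started with its own RPC socket, per-error-code NVMe statistics and data digest (ddgst) are enabled on the TCP controller, crc32c corruption is injected through the accel error-injection RPC, the workload is driven via perform_tests, and success is judged by the transient-transport-error counter read back from bdev_get_iostat. A condensed sketch of that sequence, using only the RPC calls, flags, and sockets that appear in the trace (socket targets and the "> 0" check mirror the script's xtrace; flag semantics are not asserted beyond what the log shows):

    RPC=scripts/rpc.py                # SPDK RPC client, as invoked in the trace
    BPERF_SOCK=/var/tmp/bperf.sock    # bdevperf RPC socket used by the test

    # enable per-error-code stats and the retry setting shown in the trace
    $RPC -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # start with crc32c injection disabled, then attach with data digest enabled
    $RPC accel_error_inject_error -o crc32c -t disable
    $RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # arm crc32c corruption (flags taken verbatim from the trace)
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    # drive I/O through bdevperf for the configured window
    examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests
    # the run only passes if transient transport errors were actually observed
    errs=$($RPC -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 | \
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errs > 0 ))

The "data digest error" records that follow are the expected outcome of this setup: each corrupted crc32c computation surfaces on the host receive path as a digest mismatch, which bdev_nvme counts as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) and retries.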
00:35:18.456 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:18.456 Zero copy mechanism will not be used. 00:35:18.456 Running I/O for 2 seconds... 00:35:18.456 [2024-12-16 06:03:52.104096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.456 [2024-12-16 06:03:52.104127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.456 [2024-12-16 06:03:52.104137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.456 [2024-12-16 06:03:52.109909] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.456 [2024-12-16 06:03:52.109932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.456 [2024-12-16 06:03:52.109941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.456 [2024-12-16 06:03:52.115894] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.456 [2024-12-16 06:03:52.115915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.456 [2024-12-16 06:03:52.115923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.456 [2024-12-16 06:03:52.121839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.456 [2024-12-16 06:03:52.121865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.456 [2024-12-16 06:03:52.121874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.456 [2024-12-16 06:03:52.127658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.456 [2024-12-16 06:03:52.127679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.456 [2024-12-16 06:03:52.127687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.456 [2024-12-16 06:03:52.133457] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.456 [2024-12-16 06:03:52.133477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.456 [2024-12-16 06:03:52.133485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.456 [2024-12-16 06:03:52.139030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.456 [2024-12-16 06:03:52.139050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.456 [2024-12-16 06:03:52.139061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.456 [2024-12-16 06:03:52.144702] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.456 [2024-12-16 06:03:52.144724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.456 [2024-12-16 06:03:52.144731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.456 [2024-12-16 06:03:52.150295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.456 [2024-12-16 06:03:52.150316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.456 [2024-12-16 06:03:52.150324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.456 [2024-12-16 06:03:52.156109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.456 [2024-12-16 06:03:52.156129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.456 [2024-12-16 06:03:52.156137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.456 [2024-12-16 06:03:52.161784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.456 [2024-12-16 06:03:52.161805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.456 [2024-12-16 06:03:52.161813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.456 [2024-12-16 06:03:52.167498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.456 [2024-12-16 06:03:52.167522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.456 [2024-12-16 06:03:52.167530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.456 [2024-12-16 06:03:52.173164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.456 [2024-12-16 06:03:52.173185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.456 [2024-12-16 06:03:52.173192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.456 [2024-12-16 06:03:52.178825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.456 [2024-12-16 06:03:52.178845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.456 [2024-12-16 06:03:52.178860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.456 [2024-12-16 06:03:52.184367] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.456 [2024-12-16 06:03:52.184387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.456 [2024-12-16 06:03:52.184395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.456 [2024-12-16 06:03:52.189945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.456 [2024-12-16 06:03:52.189968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.456 [2024-12-16 06:03:52.189976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.456 [2024-12-16 06:03:52.195515] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.456 [2024-12-16 06:03:52.195536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.456 [2024-12-16 06:03:52.195543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.456 [2024-12-16 06:03:52.200995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.456 [2024-12-16 06:03:52.201016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.456 [2024-12-16 06:03:52.201024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.456 [2024-12-16 06:03:52.206344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.457 [2024-12-16 06:03:52.206363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.457 [2024-12-16 06:03:52.206371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.457 [2024-12-16 06:03:52.211569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.457 [2024-12-16 06:03:52.211589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.457 [2024-12-16 06:03:52.211597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.457 [2024-12-16 06:03:52.217091] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.457 [2024-12-16 06:03:52.217111] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.457 [2024-12-16 06:03:52.217118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.457 [2024-12-16 06:03:52.222610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.457 [2024-12-16 06:03:52.222631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.457 [2024-12-16 06:03:52.222638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.457 [2024-12-16 06:03:52.228171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.457 [2024-12-16 06:03:52.228191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.457 [2024-12-16 06:03:52.228198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.457 [2024-12-16 06:03:52.233786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.457 [2024-12-16 06:03:52.233806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.457 [2024-12-16 06:03:52.233814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.457 [2024-12-16 06:03:52.239414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.457 [2024-12-16 06:03:52.239434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.457 [2024-12-16 06:03:52.239442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.457 [2024-12-16 06:03:52.244833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.457 [2024-12-16 06:03:52.244859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.457 [2024-12-16 06:03:52.244867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.457 [2024-12-16 06:03:52.250257] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.457 [2024-12-16 06:03:52.250276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.457 [2024-12-16 06:03:52.250284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.457 [2024-12-16 06:03:52.255742] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf37a30) 00:35:18.457 [2024-12-16 06:03:52.255763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.457 [2024-12-16 06:03:52.255770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.457 [2024-12-16 06:03:52.261177] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.457 [2024-12-16 06:03:52.261199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.457 [2024-12-16 06:03:52.261207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.457 [2024-12-16 06:03:52.266774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.457 [2024-12-16 06:03:52.266795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.457 [2024-12-16 06:03:52.266805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.457 [2024-12-16 06:03:52.272278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.457 [2024-12-16 06:03:52.272299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.457 [2024-12-16 06:03:52.272307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.457 [2024-12-16 06:03:52.277724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.457 [2024-12-16 06:03:52.277745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.457 [2024-12-16 06:03:52.277753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.457 [2024-12-16 06:03:52.283434] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.457 [2024-12-16 06:03:52.283455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.457 [2024-12-16 06:03:52.283466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.457 [2024-12-16 06:03:52.289071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.457 [2024-12-16 06:03:52.289092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.457 [2024-12-16 06:03:52.289100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.457 [2024-12-16 06:03:52.294796] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.457 [2024-12-16 06:03:52.294816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.457 [2024-12-16 06:03:52.294824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.457 [2024-12-16 06:03:52.300541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.457 [2024-12-16 06:03:52.300563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.457 [2024-12-16 06:03:52.300571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.457 [2024-12-16 06:03:52.305866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.457 [2024-12-16 06:03:52.305886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.457 [2024-12-16 06:03:52.305894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.716 [2024-12-16 06:03:52.311097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.716 [2024-12-16 06:03:52.311118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.716 [2024-12-16 06:03:52.311126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.716 [2024-12-16 06:03:52.316337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.716 [2024-12-16 06:03:52.316358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.716 [2024-12-16 06:03:52.316366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.716 [2024-12-16 06:03:52.321620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.716 [2024-12-16 06:03:52.321640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.716 [2024-12-16 06:03:52.321647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.716 [2024-12-16 06:03:52.327437] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.716 [2024-12-16 06:03:52.327458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.716 [2024-12-16 06:03:52.327466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:35:18.716 [2024-12-16 06:03:52.332741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.716 [2024-12-16 06:03:52.332766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.716 [2024-12-16 06:03:52.332774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.716 [2024-12-16 06:03:52.338861] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.716 [2024-12-16 06:03:52.338881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.716 [2024-12-16 06:03:52.338890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.716 [2024-12-16 06:03:52.343604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.716 [2024-12-16 06:03:52.343624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.716 [2024-12-16 06:03:52.343631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.716 [2024-12-16 06:03:52.348955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.716 [2024-12-16 06:03:52.348975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.716 [2024-12-16 06:03:52.348983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.716 [2024-12-16 06:03:52.354313] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.716 [2024-12-16 06:03:52.354333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.716 [2024-12-16 06:03:52.354341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.716 [2024-12-16 06:03:52.359776] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.716 [2024-12-16 06:03:52.359797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.716 [2024-12-16 06:03:52.359805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.716 [2024-12-16 06:03:52.365356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.716 [2024-12-16 06:03:52.365388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.716 [2024-12-16 06:03:52.365396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.716 [2024-12-16 06:03:52.370698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.716 [2024-12-16 06:03:52.370717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.716 [2024-12-16 06:03:52.370725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.716 [2024-12-16 06:03:52.375877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.716 [2024-12-16 06:03:52.375896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.716 [2024-12-16 06:03:52.375909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.716 [2024-12-16 06:03:52.381053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.716 [2024-12-16 06:03:52.381073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.716 [2024-12-16 06:03:52.381081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.716 [2024-12-16 06:03:52.386253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.716 [2024-12-16 06:03:52.386272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.716 [2024-12-16 06:03:52.386280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.716 [2024-12-16 06:03:52.391571] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.716 [2024-12-16 06:03:52.391590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.716 [2024-12-16 06:03:52.391597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.716 [2024-12-16 06:03:52.396693] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.716 [2024-12-16 06:03:52.396712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.716 [2024-12-16 06:03:52.396720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.716 [2024-12-16 06:03:52.401965] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.716 [2024-12-16 06:03:52.401985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.716 [2024-12-16 06:03:52.401992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.716 [2024-12-16 06:03:52.407459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.716 [2024-12-16 06:03:52.407479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.407486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.412855] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.412890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.412897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.418289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.418308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.418316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.423558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.423580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.423588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.428928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.428948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.428955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.434123] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.434142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.434149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.439114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.439134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:18.717 [2024-12-16 06:03:52.439141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.444374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.444394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.444401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.449533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.449552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.449560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.454731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.454750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.454758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.460048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.460068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.460076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.465828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.465852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.465861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.471328] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.471348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.471355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.476709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.476728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.476736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.482205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.482224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.482231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.487855] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.487876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.487883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.493002] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.493021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.493029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.498263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.498283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.498290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.503623] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.503643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.503651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.508787] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.508806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.508813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.514645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.514665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.514675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.519333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.519352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.519360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.524603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.524622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.524630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.529921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.529940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.529948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.535037] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.535057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.535064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.540377] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.540396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.540403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.545556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.717 [2024-12-16 06:03:52.545575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.717 [2024-12-16 06:03:52.545583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.717 [2024-12-16 06:03:52.550875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 
00:35:18.718 [2024-12-16 06:03:52.550894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.718 [2024-12-16 06:03:52.550902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.718 [2024-12-16 06:03:52.557252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.718 [2024-12-16 06:03:52.557273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.718 [2024-12-16 06:03:52.557280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.718 [2024-12-16 06:03:52.562784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.718 [2024-12-16 06:03:52.562806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.718 [2024-12-16 06:03:52.562814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.718 [2024-12-16 06:03:52.568659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.718 [2024-12-16 06:03:52.568679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.718 [2024-12-16 06:03:52.568687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.575363] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.575384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.575391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.582769] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.582791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.582799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.589814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.589835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.589843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.597902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.597923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.597931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.606453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.606474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.606482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.613391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.613412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.613422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.619411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.619432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.619440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.625810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.625831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.625839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.631157] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.631179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.631186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.638099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.638120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.638128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.645962] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.645983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.645991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.652660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.652681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.652689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.659117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.659138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.659145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.664855] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.664875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.664883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.670602] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.670622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.670629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.676233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.676254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.676265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.681729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.681748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.681755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
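
Each of the entries above reports the same condition: the CRC32C completion callback (nvme_tcp_accel_seq_recv_compute_crc32_done) found that the data digest carried in a received C2HData PDU does not match the digest recomputed over the payload, and the affected READ is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22); dnr:0 indicates the command may be retried. For reference, below is a minimal sketch of the CRC-32C (Castagnoli) digest these checks recompute, assuming the standard polynomial NVMe/TCP specifies for HDGST/DDGST; it is illustrative only and is not SPDK's accelerated (accel framework) code path.

/*
 * Minimal bitwise CRC-32C (Castagnoli) sketch, the digest algorithm
 * NVMe/TCP uses for header and data digests.  Illustrative only; SPDK
 * computes this through the accel framework rather than a byte loop.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            /* reflected Castagnoli polynomial 0x82F63B78 */
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* standard check value: CRC-32C("123456789") == 0xE3069283 */
    const char *msg = "123456789";
    printf("0x%08X\n", crc32c(msg, strlen(msg)));
    return 0;
}

A receiver performs this computation over the PDU data and compares the result with the DDGST field appended to the PDU; any mismatch is surfaced exactly as the "data digest error on tqpair" lines in this log.
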
00:35:18.976 [2024-12-16 06:03:52.687144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.687164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.687172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.692593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.692613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.692622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.698130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.698150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.698157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.703574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.703597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.703605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.708790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.708811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.708818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.714044] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.714065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.714073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.719225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.719245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.976 [2024-12-16 06:03:52.719253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.976 [2024-12-16 06:03:52.724473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.976 [2024-12-16 06:03:52.724494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.977 [2024-12-16 06:03:52.724501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.977 [2024-12-16 06:03:52.729712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.977 [2024-12-16 06:03:52.729732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.977 [2024-12-16 06:03:52.729739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.977 [2024-12-16 06:03:52.735669] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.977 [2024-12-16 06:03:52.735690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.977 [2024-12-16 06:03:52.735698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.977 [2024-12-16 06:03:52.742440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.977 [2024-12-16 06:03:52.742461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.977 [2024-12-16 06:03:52.742469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.977 [2024-12-16 06:03:52.748899] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.977 [2024-12-16 06:03:52.748919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.977 [2024-12-16 06:03:52.748927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.977 [2024-12-16 06:03:52.754759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.977 [2024-12-16 06:03:52.754780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.977 [2024-12-16 06:03:52.754787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.977 [2024-12-16 06:03:52.760820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.977 [2024-12-16 06:03:52.760842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.977 [2024-12-16 06:03:52.760857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.977 [2024-12-16 06:03:52.768371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.977 [2024-12-16 06:03:52.768392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.977 [2024-12-16 06:03:52.768400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.977 [2024-12-16 06:03:52.774714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.977 [2024-12-16 06:03:52.774735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.977 [2024-12-16 06:03:52.774747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.977 [2024-12-16 06:03:52.783062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.977 [2024-12-16 06:03:52.783083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.977 [2024-12-16 06:03:52.783091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.977 [2024-12-16 06:03:52.791042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.977 [2024-12-16 06:03:52.791063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.977 [2024-12-16 06:03:52.791071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.977 [2024-12-16 06:03:52.799136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.977 [2024-12-16 06:03:52.799158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.977 [2024-12-16 06:03:52.799166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.977 [2024-12-16 06:03:52.807352] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.977 [2024-12-16 06:03:52.807374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.977 [2024-12-16 06:03:52.807382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.977 [2024-12-16 06:03:52.815654] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.977 [2024-12-16 06:03:52.815676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.977 [2024-12-16 06:03:52.815684] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.977 [2024-12-16 06:03:52.821612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.977 [2024-12-16 06:03:52.821632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.977 [2024-12-16 06:03:52.821640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.977 [2024-12-16 06:03:52.827357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:18.977 [2024-12-16 06:03:52.827378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.977 [2024-12-16 06:03:52.827386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.235 [2024-12-16 06:03:52.833156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.235 [2024-12-16 06:03:52.833177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.235 [2024-12-16 06:03:52.833184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.235 [2024-12-16 06:03:52.838889] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.235 [2024-12-16 06:03:52.838913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.235 [2024-12-16 06:03:52.838921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.235 [2024-12-16 06:03:52.844497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.235 [2024-12-16 06:03:52.844518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.235 [2024-12-16 06:03:52.844525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.235 [2024-12-16 06:03:52.850062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.235 [2024-12-16 06:03:52.850082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.235 [2024-12-16 06:03:52.850090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.235 [2024-12-16 06:03:52.853039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.853059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 
[2024-12-16 06:03:52.853067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.858371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.858391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.858399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.863658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.863679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.863686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.869069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.869089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.869097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.874233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.874253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.874260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.879588] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.879608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.879616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.884758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.884779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.884787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.890195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.890215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.890223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.895649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.895670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.895677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.900984] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.901006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.901014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.906377] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.906398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.906406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.911790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.911811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.911818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.917229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.917250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.917257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.922662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.922682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.922689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.928023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.928043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.928054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.933247] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.933268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.933278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.938084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.938105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.938113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.943155] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.943174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.943181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.948308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.948328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.948336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.953356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.953376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.953383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.959382] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.959403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.959411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.965953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.965975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.965983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.973249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.973270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.973277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.980958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.980982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.980991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.988316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.988338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.988346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:52.996146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:52.996169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:52.996177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.236 [2024-12-16 06:03:53.003750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.236 [2024-12-16 06:03:53.003773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.236 [2024-12-16 06:03:53.003780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.237 [2024-12-16 06:03:53.011446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.237 [2024-12-16 06:03:53.011468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.237 [2024-12-16 06:03:53.011476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.237 [2024-12-16 06:03:53.018748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.237 
[2024-12-16 06:03:53.018769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.237 [2024-12-16 06:03:53.018777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.237 [2024-12-16 06:03:53.026769] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.237 [2024-12-16 06:03:53.026790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.237 [2024-12-16 06:03:53.026798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.237 [2024-12-16 06:03:53.035426] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.237 [2024-12-16 06:03:53.035447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.237 [2024-12-16 06:03:53.035455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.237 [2024-12-16 06:03:53.042949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.237 [2024-12-16 06:03:53.042970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.237 [2024-12-16 06:03:53.042978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.237 [2024-12-16 06:03:53.049234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.237 [2024-12-16 06:03:53.049255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.237 [2024-12-16 06:03:53.049263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.237 [2024-12-16 06:03:53.055381] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.237 [2024-12-16 06:03:53.055401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.237 [2024-12-16 06:03:53.055409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.237 [2024-12-16 06:03:53.061462] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.237 [2024-12-16 06:03:53.061483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.237 [2024-12-16 06:03:53.061490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.237 [2024-12-16 06:03:53.067188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf37a30) 00:35:19.237 [2024-12-16 06:03:53.067208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.237 [2024-12-16 06:03:53.067216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.237 [2024-12-16 06:03:53.072581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.237 [2024-12-16 06:03:53.072601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.237 [2024-12-16 06:03:53.072608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.237 [2024-12-16 06:03:53.078024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.237 [2024-12-16 06:03:53.078045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.237 [2024-12-16 06:03:53.078052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.237 [2024-12-16 06:03:53.083492] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.237 [2024-12-16 06:03:53.083513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.237 [2024-12-16 06:03:53.083520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.237 [2024-12-16 06:03:53.088933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.237 [2024-12-16 06:03:53.088953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.237 [2024-12-16 06:03:53.088961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.496 [2024-12-16 06:03:53.094418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.496 [2024-12-16 06:03:53.094438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.496 [2024-12-16 06:03:53.094449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.496 5329.00 IOPS, 666.12 MiB/s [2024-12-16T05:03:53.352Z] [2024-12-16 06:03:53.100735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.496 [2024-12-16 06:03:53.100756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.496 [2024-12-16 06:03:53.100764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.496 [2024-12-16 06:03:53.105911] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.496 [2024-12-16 06:03:53.105933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.496 [2024-12-16 06:03:53.105940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.496 [2024-12-16 06:03:53.109374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.496 [2024-12-16 06:03:53.109394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.496 [2024-12-16 06:03:53.109401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.496 [2024-12-16 06:03:53.113565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.496 [2024-12-16 06:03:53.113586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.496 [2024-12-16 06:03:53.113594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.496 [2024-12-16 06:03:53.118842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.496 [2024-12-16 06:03:53.118869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.496 [2024-12-16 06:03:53.118877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.496 [2024-12-16 06:03:53.123945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.496 [2024-12-16 06:03:53.123965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.496 [2024-12-16 06:03:53.123973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.496 [2024-12-16 06:03:53.129073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.496 [2024-12-16 06:03:53.129093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.496 [2024-12-16 06:03:53.129101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.496 [2024-12-16 06:03:53.134166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.496 [2024-12-16 06:03:53.134186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.496 [2024-12-16 06:03:53.134193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:35:19.496 [2024-12-16 06:03:53.139376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.496 [2024-12-16 06:03:53.139403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.496 [2024-12-16 06:03:53.139411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.496 [2024-12-16 06:03:53.144598] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.496 [2024-12-16 06:03:53.144617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.496 [2024-12-16 06:03:53.144625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.496 [2024-12-16 06:03:53.149857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.496 [2024-12-16 06:03:53.149877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.496 [2024-12-16 06:03:53.149885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.496 [2024-12-16 06:03:53.155095] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.496 [2024-12-16 06:03:53.155116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.496 [2024-12-16 06:03:53.155123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.496 [2024-12-16 06:03:53.160429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.496 [2024-12-16 06:03:53.160450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.496 [2024-12-16 06:03:53.160458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.496 [2024-12-16 06:03:53.165679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.496 [2024-12-16 06:03:53.165699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.496 [2024-12-16 06:03:53.165707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.496 [2024-12-16 06:03:53.170860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.170881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.170888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.176065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.176085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.176093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.181308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.181328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.181336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.186529] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.186549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.186557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.191762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.191783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.191790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.196959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.196979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.196986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.202228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.202248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.202256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.207429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.207449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.207456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.212573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.212593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.212601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.217808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.217827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.217835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.222962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.222983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.222990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.228142] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.228161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.228172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.233316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.233335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.233343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.238484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.238504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.238512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.243720] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.243740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.243748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.249016] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.249036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.249044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.254790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.254811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.254819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.260239] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.260259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.260267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.265697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.265716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.265723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.271191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.271211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.271218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.276669] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.276692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.276700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.282031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.282052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 
[2024-12-16 06:03:53.282059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.287216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.287237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.287245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.292486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.292506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.292513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.297730] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.297751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.297759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.302932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.302952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.302960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.308091] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.308113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.308120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.497 [2024-12-16 06:03:53.313290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.497 [2024-12-16 06:03:53.313310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.497 [2024-12-16 06:03:53.313318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.498 [2024-12-16 06:03:53.318543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.498 [2024-12-16 06:03:53.318563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9952 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:19.498 [2024-12-16 06:03:53.318570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.498 [2024-12-16 06:03:53.323708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.498 [2024-12-16 06:03:53.323728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.498 [2024-12-16 06:03:53.323736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.498 [2024-12-16 06:03:53.328959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.498 [2024-12-16 06:03:53.328979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.498 [2024-12-16 06:03:53.328987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.498 [2024-12-16 06:03:53.334211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.498 [2024-12-16 06:03:53.334230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.498 [2024-12-16 06:03:53.334238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.498 [2024-12-16 06:03:53.339390] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.498 [2024-12-16 06:03:53.339412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.498 [2024-12-16 06:03:53.339420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.498 [2024-12-16 06:03:53.344677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.498 [2024-12-16 06:03:53.344698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.498 [2024-12-16 06:03:53.344706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.498 [2024-12-16 06:03:53.349959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.498 [2024-12-16 06:03:53.349980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.498 [2024-12-16 06:03:53.349989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.757 [2024-12-16 06:03:53.355263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.757 [2024-12-16 06:03:53.355284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.757 [2024-12-16 06:03:53.355292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.757 [2024-12-16 06:03:53.360516] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.757 [2024-12-16 06:03:53.360537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.757 [2024-12-16 06:03:53.360545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.757 [2024-12-16 06:03:53.366287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.757 [2024-12-16 06:03:53.366311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.757 [2024-12-16 06:03:53.366320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.757 [2024-12-16 06:03:53.372049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.757 [2024-12-16 06:03:53.372072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.757 [2024-12-16 06:03:53.372080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.757 [2024-12-16 06:03:53.377537] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.757 [2024-12-16 06:03:53.377559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.757 [2024-12-16 06:03:53.377566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.757 [2024-12-16 06:03:53.382934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.757 [2024-12-16 06:03:53.382954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.757 [2024-12-16 06:03:53.382962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.757 [2024-12-16 06:03:53.388228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.757 [2024-12-16 06:03:53.388249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.757 [2024-12-16 06:03:53.388257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.757 [2024-12-16 06:03:53.393506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.757 [2024-12-16 06:03:53.393527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.757 [2024-12-16 06:03:53.393534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.757 [2024-12-16 06:03:53.398720] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.757 [2024-12-16 06:03:53.398742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.757 [2024-12-16 06:03:53.398750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.757 [2024-12-16 06:03:53.403962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.757 [2024-12-16 06:03:53.403984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.757 [2024-12-16 06:03:53.403991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.757 [2024-12-16 06:03:53.409514] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.757 [2024-12-16 06:03:53.409537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.757 [2024-12-16 06:03:53.409545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.757 [2024-12-16 06:03:53.415376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.757 [2024-12-16 06:03:53.415398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.757 [2024-12-16 06:03:53.415407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.757 [2024-12-16 06:03:53.422640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.757 [2024-12-16 06:03:53.422663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.757 [2024-12-16 06:03:53.422671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.757 [2024-12-16 06:03:53.429585] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.757 [2024-12-16 06:03:53.429607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.757 [2024-12-16 06:03:53.429615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.757 [2024-12-16 06:03:53.436993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.757 
[2024-12-16 06:03:53.437015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.757 [2024-12-16 06:03:53.437024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.757 [2024-12-16 06:03:53.442036] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.757 [2024-12-16 06:03:53.442058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.757 [2024-12-16 06:03:53.442065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.757 [2024-12-16 06:03:53.447537] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.757 [2024-12-16 06:03:53.447558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.757 [2024-12-16 06:03:53.447566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.757 [2024-12-16 06:03:53.452988] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.453010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.453018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.458432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.458453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.458460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.463804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.463824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.463836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.469341] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.469361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.469369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.474825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.474851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.474860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.480310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.480331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.480338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.485779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.485799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.485807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.491140] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.491161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.491169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.496399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.496419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.496427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.501653] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.501675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.501683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.506918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.506938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.506946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.512176] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.512200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.512208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.517457] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.517478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.517486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.522808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.522829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.522837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.528114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.528135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.528144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.533407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.533428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.533435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.538820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.538841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.538856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.544284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.544306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.544315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
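The run of READ failures above is the expected outcome of this leg of the nvmf_digest_error test: crc32c errors are injected into the accel framework (the accel_error_inject_error calls appear in the trace further down), the host's TCP data-digest check in nvme_tcp_accel_seq_recv_compute_crc32_done therefore fails, and each affected READ is completed with the (00/22) status printed on every completion line, i.e. status code type 0x0 / status code 0x22, which SPDK and the NVMe spec label a (command) transient transport error. A rough sketch that tallies those completions offline from a captured log, assuming only the spdk_nvme_print_completion line format visible in this transcript:

#!/usr/bin/env python3
# Rough sketch: tally "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" completions in a
# captured log like the one above. Only the spdk_nvme_print_completion line format
# shown in this transcript is assumed.
import re
import sys
from collections import Counter

COMPLETION = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: COMMAND TRANSIENT TRANSPORT ERROR "
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) qid:(?P<qid>\d+)"
)

def count_transient_errors(text):
    per_qid = Counter()
    for m in COMPLETION.finditer(text):
        # (00/22) is status code type 0x0, status code 0x22: Transient Transport Error.
        if (m.group("sct"), m.group("sc")) == ("00", "22"):
            per_qid[int(m.group("qid"))] += 1
    return per_qid

if __name__ == "__main__":
    per_qid = count_transient_errors(sys.stdin.read())
    for qid, n in sorted(per_qid.items()):
        print(f"qid {qid}: {n} transient transport errors")
    print(f"total: {sum(per_qid.values())}")

The live test does the same accounting through bdevperf's --nvme-error-stat counters rather than by parsing the log, as the get_transient_errcount step below shows.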
00:35:19.758 [2024-12-16 06:03:53.550048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.550069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.550077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.556890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.556911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.556919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.563671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.563692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.563700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.570281] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.570302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.570310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.577000] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.577023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.577031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.583941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.583963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.583971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.591366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.591389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.591397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.599233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.599257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.599264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.758 [2024-12-16 06:03:53.607608] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:19.758 [2024-12-16 06:03:53.607630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.758 [2024-12-16 06:03:53.607638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.018 [2024-12-16 06:03:53.615292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.018 [2024-12-16 06:03:53.615314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.018 [2024-12-16 06:03:53.615322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.018 [2024-12-16 06:03:53.622337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.018 [2024-12-16 06:03:53.622359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.018 [2024-12-16 06:03:53.622371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.018 [2024-12-16 06:03:53.628180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.018 [2024-12-16 06:03:53.628201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.018 [2024-12-16 06:03:53.628209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.018 [2024-12-16 06:03:53.634214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.018 [2024-12-16 06:03:53.634234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.018 [2024-12-16 06:03:53.634242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.018 [2024-12-16 06:03:53.640181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.018 [2024-12-16 06:03:53.640202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.018 [2024-12-16 06:03:53.640210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.018 [2024-12-16 06:03:53.646030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.018 [2024-12-16 06:03:53.646051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.018 [2024-12-16 06:03:53.646059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.018 [2024-12-16 06:03:53.651812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.018 [2024-12-16 06:03:53.651834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.018 [2024-12-16 06:03:53.651844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.018 [2024-12-16 06:03:53.658164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.018 [2024-12-16 06:03:53.658187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.018 [2024-12-16 06:03:53.658197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.664347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.664371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.664380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.670639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.670662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.670672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.676819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.676852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.676861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.680826] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.680856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.680865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.685976] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.685998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.686007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.692095] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.692118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.692127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.698041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.698062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.698070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.703855] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.703876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.703884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.709715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.709737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.709745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.716028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.716050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.716058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.721996] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.722019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 
[2024-12-16 06:03:53.722026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.727740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.727762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.727770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.733437] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.733459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.733467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.739143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.739164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.739171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.744998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.745019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.745026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.750761] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.750782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.750790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.756350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.756371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.756378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.762307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.762328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.762335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.767588] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.767610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.767617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.773291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.773312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.773323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.778876] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.778896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.778904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.784454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.784475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.784483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.790583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.790604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.790612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.797378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.797399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.797407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.803828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.803855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.803863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.809556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.809576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.809584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.815122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.815142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.019 [2024-12-16 06:03:53.815150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.019 [2024-12-16 06:03:53.821540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.019 [2024-12-16 06:03:53.821561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.020 [2024-12-16 06:03:53.821569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.020 [2024-12-16 06:03:53.827552] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.020 [2024-12-16 06:03:53.827576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.020 [2024-12-16 06:03:53.827585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.020 [2024-12-16 06:03:53.834211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.020 [2024-12-16 06:03:53.834232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.020 [2024-12-16 06:03:53.834240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.020 [2024-12-16 06:03:53.840723] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.020 [2024-12-16 06:03:53.840744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.020 [2024-12-16 06:03:53.840752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.020 [2024-12-16 06:03:53.847748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.020 [2024-12-16 06:03:53.847770] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.020 [2024-12-16 06:03:53.847778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.020 [2024-12-16 06:03:53.854696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.020 [2024-12-16 06:03:53.854718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.020 [2024-12-16 06:03:53.854726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.020 [2024-12-16 06:03:53.862384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.020 [2024-12-16 06:03:53.862406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.020 [2024-12-16 06:03:53.862414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.020 [2024-12-16 06:03:53.869503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.020 [2024-12-16 06:03:53.869525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.020 [2024-12-16 06:03:53.869533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.279 [2024-12-16 06:03:53.875148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.279 [2024-12-16 06:03:53.875170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.279 [2024-12-16 06:03:53.875179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.279 [2024-12-16 06:03:53.881021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.279 [2024-12-16 06:03:53.881043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.279 [2024-12-16 06:03:53.881051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.279 [2024-12-16 06:03:53.886714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.279 [2024-12-16 06:03:53.886735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.279 [2024-12-16 06:03:53.886743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.279 [2024-12-16 06:03:53.892191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.279 
[2024-12-16 06:03:53.892211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.279 [2024-12-16 06:03:53.892219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.279 [2024-12-16 06:03:53.899011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.279 [2024-12-16 06:03:53.899032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.279 [2024-12-16 06:03:53.899040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.279 [2024-12-16 06:03:53.906593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.279 [2024-12-16 06:03:53.906614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.279 [2024-12-16 06:03:53.906622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.279 [2024-12-16 06:03:53.914329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.279 [2024-12-16 06:03:53.914351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.279 [2024-12-16 06:03:53.914359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.279 [2024-12-16 06:03:53.922140] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.279 [2024-12-16 06:03:53.922161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.279 [2024-12-16 06:03:53.922169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.279 [2024-12-16 06:03:53.928414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.279 [2024-12-16 06:03:53.928436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.279 [2024-12-16 06:03:53.928444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.279 [2024-12-16 06:03:53.935276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.279 [2024-12-16 06:03:53.935298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.279 [2024-12-16 06:03:53.935306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.279 [2024-12-16 06:03:53.943223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf37a30) 00:35:20.279 [2024-12-16 06:03:53.943244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.279 [2024-12-16 06:03:53.943255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.279 [2024-12-16 06:03:53.951401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.279 [2024-12-16 06:03:53.951422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.279 [2024-12-16 06:03:53.951430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.279 [2024-12-16 06:03:53.959744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.279 [2024-12-16 06:03:53.959765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.279 [2024-12-16 06:03:53.959773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.279 [2024-12-16 06:03:53.966963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.279 [2024-12-16 06:03:53.966984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.279 [2024-12-16 06:03:53.966992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.279 [2024-12-16 06:03:53.972965] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:53.972986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:53.972994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:53.978786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:53.978807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:53.978814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:53.984103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:53.984123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:53.984131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:53.989549] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:53.989570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:53.989577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:53.994750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:53.994771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:53.994779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:53.999998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:54.000019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:54.000027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:54.005239] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:54.005260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:54.005267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:54.010476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:54.010497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:54.010505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:54.015830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:54.015858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:54.015866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:54.021575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:54.021596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:54.021604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.280 
[2024-12-16 06:03:54.027207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:54.027229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:54.027237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:54.032745] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:54.032767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:54.032774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:54.038394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:54.038415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:54.038422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:54.043890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:54.043910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:54.043917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:54.049307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:54.049328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:54.049335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:54.054881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:54.054902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:54.054909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:54.060346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:54.060366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:54.060374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:54.065808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:54.065828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:54.065835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:54.071429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:54.071449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:54.071457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:54.076874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:54.076893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:54.076901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:54.082415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:54.082435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:54.082443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:54.087873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:54.087895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:54.087903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:54.093634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:54.093655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:54.093666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.280 [2024-12-16 06:03:54.099483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37a30) 00:35:20.280 [2024-12-16 06:03:54.099504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.280 [2024-12-16 06:03:54.099512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:20.280 5354.50 IOPS, 669.31 MiB/s
00:35:20.280 Latency(us)
00:35:20.280 [2024-12-16T05:03:54.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:20.280 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:35:20.280 nvme0n1 : 2.00 5354.81 669.35 0.00 0.00 2985.37 639.76 9549.53
00:35:20.280 [2024-12-16T05:03:54.136Z] ===================================================================================================================
00:35:20.280 [2024-12-16T05:03:54.136Z] Total : 5354.81 669.35 0.00 0.00 2985.37 639.76 9549.53
00:35:20.280 {
00:35:20.280   "results": [
00:35:20.280     {
00:35:20.280       "job": "nvme0n1",
00:35:20.280       "core_mask": "0x2",
00:35:20.280       "workload": "randread",
00:35:20.280       "status": "finished",
00:35:20.280       "queue_depth": 16,
00:35:20.280       "io_size": 131072,
00:35:20.280       "runtime": 2.002871,
00:35:20.280       "iops": 5354.813165700637,
00:35:20.280       "mibps": 669.3516457125796,
00:35:20.280       "io_failed": 0,
00:35:20.280       "io_timeout": 0,
00:35:20.280       "avg_latency_us": 2985.369940992341,
00:35:20.280       "min_latency_us": 639.7561904761905,
00:35:20.280       "max_latency_us": 9549.531428571428
00:35:20.281     }
00:35:20.281   ],
00:35:20.281   "core_count": 1
00:35:20.281 }
00:35:20.281 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:20.281 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:20.281 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:20.281 | .driver_specific
00:35:20.281 | .nvme_error
00:35:20.281 | .status_code
00:35:20.281 | .command_transient_transport_error'
00:35:20.281 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:20.539 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 345 > 0 ))
00:35:20.539 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3568085
00:35:20.539 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3568085 ']'
00:35:20.539 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3568085
00:35:20.539 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:35:20.539 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:35:20.539 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3568085
00:35:20.539 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:35:20.539 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:35:20.539 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3568085'
00:35:20.539 killing process with pid 3568085
00:35:20.539 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3568085
Received shutdown signal, test time was about 2.000000 seconds
00:35:20.539
00:35:20.539 Latency(us)
[2024-12-16T05:03:54.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:20.539 [2024-12-16T05:03:54.395Z] ===================================================================================================================
00:35:20.539 [2024-12-16T05:03:54.395Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:20.539 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3568085
00:35:20.797 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:35:20.797 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:20.797 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:35:20.797 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:35:20.797 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:35:20.797 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3568544
00:35:20.797 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3568544 /var/tmp/bperf.sock
00:35:20.797 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:35:20.797 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3568544 ']'
00:35:20.797 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:20.797 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:35:20.797 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:20.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:20.797 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:35:20.797 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:20.797 [2024-12-16 06:03:54.594303] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
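The get_transient_errcount step above turns the injected digest errors into a pass/fail check: it reads bdevperf's per-bdev NVMe error counters over the bperf RPC socket and requires the command_transient_transport_error counter to be non-zero (345 here; for scale, the randread pass completed 5354.81 IOPS of 131072-byte reads, which is exactly the reported 669.35 MiB/s). A hedged Python equivalent of that jq pipeline, assuming the iostat JSON layout the filter itself implies; the rpc.py path, socket and bdev name are the ones used in this run:

#!/usr/bin/env python3
# Sketch of the get_transient_errcount check, mirroring the jq filter
#   .bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error
# The rpc.py path, socket and bdev name come from this run; the JSON layout is
# assumed from the jq filter above.
import json
import subprocess

RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

def get_transient_errcount(bdev="nvme0n1", sock="/var/tmp/bperf.sock"):
    out = subprocess.run(
        [RPC, "-s", sock, "bdev_get_iostat", "-b", bdev],
        check=True, capture_output=True, text=True,
    ).stdout
    stat = json.loads(out)["bdevs"][0]
    return stat["driver_specific"]["nvme_error"]["status_code"]["command_transient_transport_error"]

if __name__ == "__main__":
    count = get_transient_errcount()
    # digest.sh asserts (( count > 0 )); this run reported 345.
    assert count > 0, "expected injected digest errors to show up as transient transport errors"
    print(count)

These counters are only populated because each bdevperf instance is configured with bdev_nvme_set_options --nvme-error-stat (and --bdev-retry-count -1, so failed I/O is not silently retried), as in the setup trace that follows.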
00:35:20.797 [2024-12-16 06:03:54.594354] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3568544 ] 00:35:20.797 [2024-12-16 06:03:54.649255] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.055 [2024-12-16 06:03:54.684889] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:21.055 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:21.055 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:21.055 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:21.055 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:21.313 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:21.313 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.313 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:21.314 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.314 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:21.314 06:03:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:21.571 nvme0n1 00:35:21.571 06:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:21.571 06:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.571 06:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:21.571 06:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.571 06:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:21.571 06:03:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:21.829 Running I/O for 2 seconds... 
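The randwrite pass is prepared the same way as the previous ones: error statistics on and retries off, crc32c injection disabled while the controller attaches with data digest enabled (--ddgst), injection re-armed to corrupt every 256th crc32c operation, then perform_tests starts the 2-second run. A sketch of that sequence using the rpc.py/bdevperf.py invocations shown in the trace; which RPC socket digest.sh's rpc_cmd helper uses for the two accel_error_inject_error calls is not visible in this excerpt, so the default socket below is an assumption:

#!/usr/bin/env python3
# Sketch replaying the setup traced above for the randwrite pass. Paths, the
# bperf.sock socket and every flag are copied from the trace; the socket used
# for accel_error_inject_error is an assumption (see note above).
import subprocess

SPDK = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk"
BPERF_SOCK = "/var/tmp/bperf.sock"

def bperf_rpc(*args):
    # digest.sh's bperf_rpc: rpc.py against the bdevperf application's socket.
    subprocess.run([f"{SPDK}/scripts/rpc.py", "-s", BPERF_SOCK, *args], check=True)

def rpc_cmd(*args):
    # Assumption: plain rpc.py (default socket) stands in for autotest's rpc_cmd.
    subprocess.run([f"{SPDK}/scripts/rpc.py", *args], check=True)

# Count NVMe error statuses per bdev and never retry failed I/O, so injected
# digest errors stay visible in bdev_get_iostat.
bperf_rpc("bdev_nvme_set_options", "--nvme-error-stat", "--bdev-retry-count", "-1")

# Keep crc32c injection off while the controller attaches with data digest on...
rpc_cmd("accel_error_inject_error", "-o", "crc32c", "-t", "disable")
bperf_rpc("bdev_nvme_attach_controller", "--ddgst", "-t", "tcp", "-a", "10.0.0.2",
          "-s", "4420", "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1", "-b", "nvme0")

# ...then corrupt every 256th crc32c operation and kick off the timed run.
rpc_cmd("accel_error_inject_error", "-o", "crc32c", "-t", "corrupt", "-i", "256")
subprocess.run([f"{SPDK}/examples/bdev/bdevperf/bdevperf.py", "-s", BPERF_SOCK,
                "perform_tests"], check=True)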
00:35:21.829 [2024-12-16 06:03:55.500945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198de038 00:35:21.829 [2024-12-16 06:03:55.501538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.829 [2024-12-16 06:03:55.501568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:21.829 [2024-12-16 06:03:55.511311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fef90 00:35:21.829 [2024-12-16 06:03:55.512418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.829 [2024-12-16 06:03:55.512440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.829 [2024-12-16 06:03:55.520577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198de8a8 00:35:21.829 [2024-12-16 06:03:55.521674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.829 [2024-12-16 06:03:55.521693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:21.829 [2024-12-16 06:03:55.529501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198eea00 00:35:21.829 [2024-12-16 06:03:55.530152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.829 [2024-12-16 06:03:55.530171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:21.829 [2024-12-16 06:03:55.539487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ebb98 00:35:21.829 [2024-12-16 06:03:55.540817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.829 [2024-12-16 06:03:55.540835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:21.829 [2024-12-16 06:03:55.549007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ef270 00:35:21.829 [2024-12-16 06:03:55.550453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.829 [2024-12-16 06:03:55.550470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:21.829 [2024-12-16 06:03:55.557385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fa3a0 00:35:21.829 [2024-12-16 06:03:55.558391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.829 [2024-12-16 06:03:55.558412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:35:21.829 [2024-12-16 06:03:55.566617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f96f8 00:35:21.829 [2024-12-16 06:03:55.567978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.829 [2024-12-16 06:03:55.567995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.829 [2024-12-16 06:03:55.574097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fc998 00:35:21.829 [2024-12-16 06:03:55.574726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.830 [2024-12-16 06:03:55.574743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:21.830 [2024-12-16 06:03:55.584054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ed0b0 00:35:21.830 [2024-12-16 06:03:55.585137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.830 [2024-12-16 06:03:55.585155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:21.830 [2024-12-16 06:03:55.593829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198eaab8 00:35:21.830 [2024-12-16 06:03:55.594918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.830 [2024-12-16 06:03:55.594936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:21.830 [2024-12-16 06:03:55.601029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e0630 00:35:21.830 [2024-12-16 06:03:55.601631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.830 [2024-12-16 06:03:55.601649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:21.830 [2024-12-16 06:03:55.609843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198df550 00:35:21.830 [2024-12-16 06:03:55.610646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.830 [2024-12-16 06:03:55.610663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:21.830 [2024-12-16 06:03:55.621040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fef90 00:35:21.830 [2024-12-16 06:03:55.622432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.830 [2024-12-16 06:03:55.622450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 
cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:21.830 [2024-12-16 06:03:55.629383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198edd58 00:35:21.830 [2024-12-16 06:03:55.630358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.830 [2024-12-16 06:03:55.630376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:21.830 [2024-12-16 06:03:55.637715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198feb58 00:35:21.830 [2024-12-16 06:03:55.638744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.830 [2024-12-16 06:03:55.638761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:21.830 [2024-12-16 06:03:55.646116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ff3c8 00:35:21.830 [2024-12-16 06:03:55.646709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.830 [2024-12-16 06:03:55.646726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:21.830 [2024-12-16 06:03:55.655341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e4de8 00:35:21.830 [2024-12-16 06:03:55.656160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.830 [2024-12-16 06:03:55.656177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:21.830 [2024-12-16 06:03:55.664690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e6fa8 00:35:21.830 [2024-12-16 06:03:55.665255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.830 [2024-12-16 06:03:55.665272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:21.830 [2024-12-16 06:03:55.673822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f31b8 00:35:21.830 [2024-12-16 06:03:55.674627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.830 [2024-12-16 06:03:55.674644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:21.830 [2024-12-16 06:03:55.683744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fa3a0 00:35:22.089 [2024-12-16 06:03:55.685030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.685048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.691148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f6cc8 00:35:22.089 [2024-12-16 06:03:55.691953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.691970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.702613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e23b8 00:35:22.089 [2024-12-16 06:03:55.704072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.704090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.708954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198eee38 00:35:22.089 [2024-12-16 06:03:55.709621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.709639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.718623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e01f8 00:35:22.089 [2024-12-16 06:03:55.719358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.719376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.728029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198feb58 00:35:22.089 [2024-12-16 06:03:55.728924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.728941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.736550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198de038 00:35:22.089 [2024-12-16 06:03:55.737411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.737429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.746006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f7da8 00:35:22.089 [2024-12-16 06:03:55.747008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.747026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.755457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f3e60 00:35:22.089 [2024-12-16 06:03:55.756535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.756553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.765170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ee5c8 00:35:22.089 [2024-12-16 06:03:55.766394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.766412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.774594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e7818 00:35:22.089 [2024-12-16 06:03:55.775959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.775976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.784090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f96f8 00:35:22.089 [2024-12-16 06:03:55.785538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.785555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.790460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f4298 00:35:22.089 [2024-12-16 06:03:55.791073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.791093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.799883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e6fa8 00:35:22.089 [2024-12-16 06:03:55.800657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.800675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.808882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e6b70 00:35:22.089 [2024-12-16 06:03:55.809742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.809760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.818356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e1b48 00:35:22.089 [2024-12-16 06:03:55.819344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.819361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.827766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198eaab8 00:35:22.089 [2024-12-16 06:03:55.828868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.828885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.837206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fc998 00:35:22.089 [2024-12-16 06:03:55.838428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.838445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.846662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f31b8 00:35:22.089 [2024-12-16 06:03:55.848042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.848059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.856161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e88f8 00:35:22.089 [2024-12-16 06:03:55.857589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.857606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.862539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fe2e8 00:35:22.089 [2024-12-16 06:03:55.863194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.863211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.872815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198feb58 00:35:22.089 [2024-12-16 06:03:55.873919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.089 [2024-12-16 06:03:55.873936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.089 [2024-12-16 06:03:55.881155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f7538 00:35:22.089 [2024-12-16 06:03:55.881805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.090 [2024-12-16 06:03:55.881822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:22.090 [2024-12-16 06:03:55.889483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f6458 00:35:22.090 [2024-12-16 06:03:55.890222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.090 [2024-12-16 06:03:55.890239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:22.090 [2024-12-16 06:03:55.898902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ee5c8 00:35:22.090 [2024-12-16 06:03:55.899745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.090 [2024-12-16 06:03:55.899762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:22.090 [2024-12-16 06:03:55.908311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e12d8 00:35:22.090 [2024-12-16 06:03:55.909278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.090 [2024-12-16 06:03:55.909296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:22.090 [2024-12-16 06:03:55.917768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e9e10 00:35:22.090 [2024-12-16 06:03:55.918865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.090 [2024-12-16 06:03:55.918882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:22.090 [2024-12-16 06:03:55.927180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e49b0 00:35:22.090 [2024-12-16 06:03:55.928384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.090 [2024-12-16 06:03:55.928402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:22.090 [2024-12-16 06:03:55.936625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e1b48 00:35:22.090 [2024-12-16 06:03:55.937942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.090 [2024-12-16 06:03:55.937959] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:55.946244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f35f0 00:35:22.349 [2024-12-16 06:03:55.947736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:55.947753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:55.954679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fe2e8 00:35:22.349 [2024-12-16 06:03:55.955684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:55.955702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:55.962954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ef6a8 00:35:22.349 [2024-12-16 06:03:55.964242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:55.964259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:55.970677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e27f0 00:35:22.349 [2024-12-16 06:03:55.971406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:55.971423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:55.980666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198eaab8 00:35:22.349 [2024-12-16 06:03:55.981436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:55.981454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:55.989943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f4298 00:35:22.349 [2024-12-16 06:03:55.990570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:55.990587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:55.999352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198de8a8 00:35:22.349 [2024-12-16 06:03:56.000104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:56.000122] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:56.007853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ed4e8 00:35:22.349 [2024-12-16 06:03:56.009133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:56.009152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:56.015840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e8d30 00:35:22.349 [2024-12-16 06:03:56.016561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:56.016578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:56.025248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fa7d8 00:35:22.349 [2024-12-16 06:03:56.026071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:56.026094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:56.034416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f4298 00:35:22.349 [2024-12-16 06:03:56.035251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:56.035268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:56.043703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ebfd0 00:35:22.349 [2024-12-16 06:03:56.044553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:56.044570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:56.052103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f0bc0 00:35:22.349 [2024-12-16 06:03:56.052825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:56.052843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:56.062240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ed920 00:35:22.349 [2024-12-16 06:03:56.063224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 
06:03:56.063241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:56.071137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ec840 00:35:22.349 [2024-12-16 06:03:56.072021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:56.072038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:56.080565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e6300 00:35:22.349 [2024-12-16 06:03:56.081672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:56.081690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:56.088717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198eaef0 00:35:22.349 [2024-12-16 06:03:56.089696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:56.089714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:56.098305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fdeb0 00:35:22.349 [2024-12-16 06:03:56.099370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:56.099388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:56.107757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f0788 00:35:22.349 [2024-12-16 06:03:56.108955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:56.108974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:56.116125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fd208 00:35:22.349 [2024-12-16 06:03:56.116866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:56.116884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:56.125275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fb8b8 00:35:22.349 [2024-12-16 06:03:56.125896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 
[2024-12-16 06:03:56.125914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:56.134746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e8088 00:35:22.349 [2024-12-16 06:03:56.135524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:56.135542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:56.145137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f92c0 00:35:22.349 [2024-12-16 06:03:56.146683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:56.146700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:56.151398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f5be8 00:35:22.349 [2024-12-16 06:03:56.152044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:56.152061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:56.160903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f3a28 00:35:22.349 [2024-12-16 06:03:56.161778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:56.161795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:56.169426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e88f8 00:35:22.349 [2024-12-16 06:03:56.170264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:56.170282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:22.349 [2024-12-16 06:03:56.178828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fb8b8 00:35:22.349 [2024-12-16 06:03:56.179780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.349 [2024-12-16 06:03:56.179798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:22.350 [2024-12-16 06:03:56.188894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e4de8 00:35:22.350 [2024-12-16 06:03:56.189890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18204 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:22.350 [2024-12-16 06:03:56.189907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:22.350 [2024-12-16 06:03:56.197164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f8e88 00:35:22.350 [2024-12-16 06:03:56.198443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.350 [2024-12-16 06:03:56.198460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:22.608 [2024-12-16 06:03:56.205113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f6020 00:35:22.608 [2024-12-16 06:03:56.205834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.608 [2024-12-16 06:03:56.205855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:22.608 [2024-12-16 06:03:56.214669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f4b08 00:35:22.608 [2024-12-16 06:03:56.215473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.608 [2024-12-16 06:03:56.215491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:22.608 [2024-12-16 06:03:56.224070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e38d0 00:35:22.608 [2024-12-16 06:03:56.225013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.608 [2024-12-16 06:03:56.225030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:22.608 [2024-12-16 06:03:56.233287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f7970 00:35:22.608 [2024-12-16 06:03:56.234225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.608 [2024-12-16 06:03:56.234243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:22.608 [2024-12-16 06:03:56.242558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fa3a0 00:35:22.608 [2024-12-16 06:03:56.243510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.608 [2024-12-16 06:03:56.243528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:22.608 [2024-12-16 06:03:56.251701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fac10 00:35:22.608 [2024-12-16 06:03:56.252788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15618 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:22.608 [2024-12-16 06:03:56.252806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:22.608 [2024-12-16 06:03:56.261449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f7da8 00:35:22.608 [2024-12-16 06:03:56.262588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.608 [2024-12-16 06:03:56.262610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:22.608 [2024-12-16 06:03:56.269528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e5ec8 00:35:22.608 [2024-12-16 06:03:56.270146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.608 [2024-12-16 06:03:56.270164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:22.608 [2024-12-16 06:03:56.278493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f7100 00:35:22.608 [2024-12-16 06:03:56.279105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.608 [2024-12-16 06:03:56.279123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:22.608 [2024-12-16 06:03:56.287593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e9e10 00:35:22.608 [2024-12-16 06:03:56.288191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.608 [2024-12-16 06:03:56.288209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:22.608 [2024-12-16 06:03:56.295840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f4298 00:35:22.608 [2024-12-16 06:03:56.296430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.609 [2024-12-16 06:03:56.296448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:22.609 [2024-12-16 06:03:56.305427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f4298 00:35:22.609 [2024-12-16 06:03:56.306025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.609 [2024-12-16 06:03:56.306044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:22.609 [2024-12-16 06:03:56.314669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f2510 00:35:22.609 [2024-12-16 06:03:56.315131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17506 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.609 [2024-12-16 06:03:56.315150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:22.609 [2024-12-16 06:03:56.324097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f9f68 00:35:22.609 [2024-12-16 06:03:56.324673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.609 [2024-12-16 06:03:56.324690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:22.609 [2024-12-16 06:03:56.333249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f8a50 00:35:22.609 [2024-12-16 06:03:56.334079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.609 [2024-12-16 06:03:56.334097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:22.609 [2024-12-16 06:03:56.343155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e3498 00:35:22.609 [2024-12-16 06:03:56.344347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.609 [2024-12-16 06:03:56.344367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:22.609 [2024-12-16 06:03:56.351811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e12d8 00:35:22.609 [2024-12-16 06:03:56.352744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.609 [2024-12-16 06:03:56.352763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:22.609 [2024-12-16 06:03:56.360705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e12d8 00:35:22.609 [2024-12-16 06:03:56.361631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.609 [2024-12-16 06:03:56.361649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:22.609 [2024-12-16 06:03:56.370004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f31b8 00:35:22.609 [2024-12-16 06:03:56.371169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.609 [2024-12-16 06:03:56.371187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:22.609 [2024-12-16 06:03:56.377368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198de470 00:35:22.609 [2024-12-16 06:03:56.378037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 
nsid:1 lba:3551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.609 [2024-12-16 06:03:56.378055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:22.609 [2024-12-16 06:03:56.386439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ef6a8 00:35:22.609 [2024-12-16 06:03:56.387105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.609 [2024-12-16 06:03:56.387123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:22.609 [2024-12-16 06:03:56.395451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198eaef0 00:35:22.609 [2024-12-16 06:03:56.396115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.609 [2024-12-16 06:03:56.396132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:22.609 [2024-12-16 06:03:56.404617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198dfdc0 00:35:22.609 [2024-12-16 06:03:56.405185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.609 [2024-12-16 06:03:56.405204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:22.609 [2024-12-16 06:03:56.412838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ef270 00:35:22.609 [2024-12-16 06:03:56.413409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.609 [2024-12-16 06:03:56.413426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:22.609 [2024-12-16 06:03:56.422436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ef270 00:35:22.609 [2024-12-16 06:03:56.423083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.609 [2024-12-16 06:03:56.423101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:22.609 [2024-12-16 06:03:56.431453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ef270 00:35:22.609 [2024-12-16 06:03:56.432102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.609 [2024-12-16 06:03:56.432120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:22.609 [2024-12-16 06:03:56.440418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ef270 00:35:22.609 [2024-12-16 06:03:56.441078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:51 nsid:1 lba:8064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.609 [2024-12-16 06:03:56.441095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:22.609 [2024-12-16 06:03:56.450584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ef270 00:35:22.609 [2024-12-16 06:03:56.451692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.609 [2024-12-16 06:03:56.451709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:22.609 [2024-12-16 06:03:56.459006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f31b8 00:35:22.609 [2024-12-16 06:03:56.459766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.609 [2024-12-16 06:03:56.459783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:22.868 [2024-12-16 06:03:56.468372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198df550 00:35:22.868 [2024-12-16 06:03:56.468941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.868 [2024-12-16 06:03:56.468958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:22.868 [2024-12-16 06:03:56.477120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198de038 00:35:22.868 [2024-12-16 06:03:56.477815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.868 [2024-12-16 06:03:56.477833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:22.868 [2024-12-16 06:03:56.486105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f57b0 00:35:22.868 [2024-12-16 06:03:56.486753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.868 [2024-12-16 06:03:56.486771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:22.868 28191.00 IOPS, 110.12 MiB/s [2024-12-16T05:03:56.724Z] [2024-12-16 06:03:56.495487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e8088 00:35:22.868 [2024-12-16 06:03:56.496133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.868 [2024-12-16 06:03:56.496154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:22.868 [2024-12-16 06:03:56.504392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ef270 00:35:22.868 [2024-12-16 
06:03:56.505107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.868 [2024-12-16 06:03:56.505126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:22.868 [2024-12-16 06:03:56.513647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fa3a0 00:35:22.868 [2024-12-16 06:03:56.514173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.868 [2024-12-16 06:03:56.514193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:22.868 [2024-12-16 06:03:56.523282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f7970 00:35:22.868 [2024-12-16 06:03:56.524043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.868 [2024-12-16 06:03:56.524061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:22.868 [2024-12-16 06:03:56.532259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f57b0 00:35:22.868 [2024-12-16 06:03:56.533010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.868 [2024-12-16 06:03:56.533028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:22.868 [2024-12-16 06:03:56.541188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e5ec8 00:35:22.868 [2024-12-16 06:03:56.541936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.868 [2024-12-16 06:03:56.541954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:22.868 [2024-12-16 06:03:56.550309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e5a90 00:35:22.868 [2024-12-16 06:03:56.550927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.868 [2024-12-16 06:03:56.550945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:22.868 [2024-12-16 06:03:56.559481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f96f8 00:35:22.868 [2024-12-16 06:03:56.560430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.868 [2024-12-16 06:03:56.560447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:22.868 [2024-12-16 06:03:56.568497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198eaef0 
00:35:22.868 [2024-12-16 06:03:56.569478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.868 [2024-12-16 06:03:56.569496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:22.868 [2024-12-16 06:03:56.577768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f31b8 00:35:22.868 [2024-12-16 06:03:56.578648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.868 [2024-12-16 06:03:56.578669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:22.868 [2024-12-16 06:03:56.586735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198eaef0 00:35:22.868 [2024-12-16 06:03:56.587590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.868 [2024-12-16 06:03:56.587607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:22.868 [2024-12-16 06:03:56.595821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ec840 00:35:22.868 [2024-12-16 06:03:56.596532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.868 [2024-12-16 06:03:56.596549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:22.869 [2024-12-16 06:03:56.604362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f6890 00:35:22.869 [2024-12-16 06:03:56.605641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.869 [2024-12-16 06:03:56.605659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:22.869 [2024-12-16 06:03:56.613814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f7100 00:35:22.869 [2024-12-16 06:03:56.614976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.869 [2024-12-16 06:03:56.614994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:22.869 [2024-12-16 06:03:56.623295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e01f8 00:35:22.869 [2024-12-16 06:03:56.624538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.869 [2024-12-16 06:03:56.624556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:22.869 [2024-12-16 06:03:56.632716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with 
pdu=0x2000198f8a50 00:35:22.869 [2024-12-16 06:03:56.634089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.869 [2024-12-16 06:03:56.634107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:22.869 [2024-12-16 06:03:56.641095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f6890 00:35:22.869 [2024-12-16 06:03:56.642136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.869 [2024-12-16 06:03:56.642153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:22.869 [2024-12-16 06:03:56.649360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f1ca0 00:35:22.869 [2024-12-16 06:03:56.650614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.869 [2024-12-16 06:03:56.650631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:22.869 [2024-12-16 06:03:56.657704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e5220 00:35:22.869 [2024-12-16 06:03:56.658410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.869 [2024-12-16 06:03:56.658427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:22.869 [2024-12-16 06:03:56.666716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f0bc0 00:35:22.869 [2024-12-16 06:03:56.667421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.869 [2024-12-16 06:03:56.667439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:22.869 [2024-12-16 06:03:56.675769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e9168 00:35:22.869 [2024-12-16 06:03:56.676456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.869 [2024-12-16 06:03:56.676474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:22.869 [2024-12-16 06:03:56.684836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198eff18 00:35:22.869 [2024-12-16 06:03:56.685506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.869 [2024-12-16 06:03:56.685524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:22.869 [2024-12-16 06:03:56.693856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xdbff10) with pdu=0x2000198f9b30 00:35:22.869 [2024-12-16 06:03:56.694524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.869 [2024-12-16 06:03:56.694542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:22.869 [2024-12-16 06:03:56.702906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fe2e8 00:35:22.869 [2024-12-16 06:03:56.703591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.869 [2024-12-16 06:03:56.703609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:22.869 [2024-12-16 06:03:56.712066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e8088 00:35:22.869 [2024-12-16 06:03:56.712750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.869 [2024-12-16 06:03:56.712768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:22.869 [2024-12-16 06:03:56.721194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e9e10 00:35:22.869 [2024-12-16 06:03:56.721879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:22.869 [2024-12-16 06:03:56.721897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.730422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198df550 00:35:23.128 [2024-12-16 06:03:56.731150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.731168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.739481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e73e0 00:35:23.128 [2024-12-16 06:03:56.740189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.740206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.748527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198eee38 00:35:23.128 [2024-12-16 06:03:56.749267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.749285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.757592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xdbff10) with pdu=0x2000198f6458 00:35:23.128 [2024-12-16 06:03:56.758262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.758280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.766602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fc560 00:35:23.128 [2024-12-16 06:03:56.767313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.767330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.775906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f2d80 00:35:23.128 [2024-12-16 06:03:56.776588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.776605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.784988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ff3c8 00:35:23.128 [2024-12-16 06:03:56.785667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.785685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.794054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f3a28 00:35:23.128 [2024-12-16 06:03:56.794717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.794735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.803107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f3e60 00:35:23.128 [2024-12-16 06:03:56.803786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.803803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.812114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e5658 00:35:23.128 [2024-12-16 06:03:56.812796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.812816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.821151] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f0788 00:35:23.128 [2024-12-16 06:03:56.821855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.821873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.830097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f0350 00:35:23.128 [2024-12-16 06:03:56.830782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.830800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.839075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f8a50 00:35:23.128 [2024-12-16 06:03:56.839757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.839775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.848126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f81e0 00:35:23.128 [2024-12-16 06:03:56.848826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.848844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.857195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198de470 00:35:23.128 [2024-12-16 06:03:56.857873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.857892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.866206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e0a68 00:35:23.128 [2024-12-16 06:03:56.866884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.866901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.875267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f1ca0 00:35:23.128 [2024-12-16 06:03:56.875949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.875966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.884294] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f5be8 00:35:23.128 [2024-12-16 06:03:56.884963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.884981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.893323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e49b0 00:35:23.128 [2024-12-16 06:03:56.894011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.894029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.902368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198efae0 00:35:23.128 [2024-12-16 06:03:56.903079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.903097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.911363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e2c28 00:35:23.128 [2024-12-16 06:03:56.912070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.912087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.920398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e5a90 00:35:23.128 [2024-12-16 06:03:56.921079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.921097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.128 [2024-12-16 06:03:56.929441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f4298 00:35:23.128 [2024-12-16 06:03:56.930123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.128 [2024-12-16 06:03:56.930140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.129 [2024-12-16 06:03:56.938444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e23b8 00:35:23.129 [2024-12-16 06:03:56.939114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.129 [2024-12-16 06:03:56.939132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.129 
[2024-12-16 06:03:56.947506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ee5c8 00:35:23.129 [2024-12-16 06:03:56.948219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.129 [2024-12-16 06:03:56.948236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.129 [2024-12-16 06:03:56.956552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e5220 00:35:23.129 [2024-12-16 06:03:56.957236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.129 [2024-12-16 06:03:56.957254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.129 [2024-12-16 06:03:56.964986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fd640 00:35:23.129 [2024-12-16 06:03:56.965635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.129 [2024-12-16 06:03:56.965652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:23.129 [2024-12-16 06:03:56.974448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f46d0 00:35:23.129 [2024-12-16 06:03:56.975230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.129 [2024-12-16 06:03:56.975248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:23.387 [2024-12-16 06:03:56.984101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fcdd0 00:35:23.387 [2024-12-16 06:03:56.985040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.387 [2024-12-16 06:03:56.985057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:23.387 [2024-12-16 06:03:56.993707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f92c0 00:35:23.387 [2024-12-16 06:03:56.994716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.387 [2024-12-16 06:03:56.994733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:23.387 [2024-12-16 06:03:57.002112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f5378 00:35:23.387 [2024-12-16 06:03:57.002782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.387 [2024-12-16 06:03:57.002800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 
dnr:0 00:35:23.387 [2024-12-16 06:03:57.010984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f7970 00:35:23.387 [2024-12-16 06:03:57.011648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.387 [2024-12-16 06:03:57.011666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:23.387 [2024-12-16 06:03:57.020038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f20d8 00:35:23.387 [2024-12-16 06:03:57.020720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.387 [2024-12-16 06:03:57.020737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:23.387 [2024-12-16 06:03:57.029291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e95a0 00:35:23.387 [2024-12-16 06:03:57.029963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.029981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.038279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ee190 00:35:23.388 [2024-12-16 06:03:57.038960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.038977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.047228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fbcf0 00:35:23.388 [2024-12-16 06:03:57.047899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.047920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.056272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fd640 00:35:23.388 [2024-12-16 06:03:57.056951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.056968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.065278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fc998 00:35:23.388 [2024-12-16 06:03:57.065962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.065979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0049 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.074314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e0630 00:35:23.388 [2024-12-16 06:03:57.075012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.075029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.083541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fe2e8 00:35:23.388 [2024-12-16 06:03:57.084024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.084041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.093011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ed4e8 00:35:23.388 [2024-12-16 06:03:57.093600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.093618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.102204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e84c0 00:35:23.388 [2024-12-16 06:03:57.103141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.103161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.111089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f5be8 00:35:23.388 [2024-12-16 06:03:57.112043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.112061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.120399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198eaef0 00:35:23.388 [2024-12-16 06:03:57.121121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.121139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.128907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f0350 00:35:23.388 [2024-12-16 06:03:57.130126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.130144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.137238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e4578 00:35:23.388 [2024-12-16 06:03:57.137921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.137939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.146266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198eb760 00:35:23.388 [2024-12-16 06:03:57.146944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.146962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.155248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f8a50 00:35:23.388 [2024-12-16 06:03:57.155949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.155966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.163670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f6020 00:35:23.388 [2024-12-16 06:03:57.164343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.164360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.173121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f35f0 00:35:23.388 [2024-12-16 06:03:57.173892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.173910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.182558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fd640 00:35:23.388 [2024-12-16 06:03:57.183460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.183477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.192013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fd208 00:35:23.388 [2024-12-16 06:03:57.193024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.193041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.200369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f6890 00:35:23.388 [2024-12-16 06:03:57.201045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.201063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.209244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198eaab8 00:35:23.388 [2024-12-16 06:03:57.209934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.209951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.218283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e38d0 00:35:23.388 [2024-12-16 06:03:57.218982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.218998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.227289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e6b70 00:35:23.388 [2024-12-16 06:03:57.227956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.227973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:23.388 [2024-12-16 06:03:57.236294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e0a68 00:35:23.388 [2024-12-16 06:03:57.236962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.388 [2024-12-16 06:03:57.236980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:23.647 [2024-12-16 06:03:57.245598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f1ca0 00:35:23.647 [2024-12-16 06:03:57.246309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.647 [2024-12-16 06:03:57.246327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:23.647 [2024-12-16 06:03:57.254664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f6020 00:35:23.647 [2024-12-16 06:03:57.255340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.647 [2024-12-16 06:03:57.255358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:23.647 [2024-12-16 06:03:57.263674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e5ec8 00:35:23.647 [2024-12-16 06:03:57.264367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.647 [2024-12-16 06:03:57.264384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:23.647 [2024-12-16 06:03:57.272734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fef90 00:35:23.647 [2024-12-16 06:03:57.273431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.647 [2024-12-16 06:03:57.273450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:23.647 [2024-12-16 06:03:57.281970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f8618 00:35:23.647 [2024-12-16 06:03:57.282640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.647 [2024-12-16 06:03:57.282661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:23.647 [2024-12-16 06:03:57.291024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e8088 00:35:23.647 [2024-12-16 06:03:57.291687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.647 [2024-12-16 06:03:57.291704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:23.647 [2024-12-16 06:03:57.300025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fe2e8 00:35:23.647 [2024-12-16 06:03:57.300711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.647 [2024-12-16 06:03:57.300732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:23.647 [2024-12-16 06:03:57.309016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f9b30 00:35:23.647 [2024-12-16 06:03:57.309687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.647 [2024-12-16 06:03:57.309704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:23.647 [2024-12-16 06:03:57.318309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f0ff8 00:35:23.647 [2024-12-16 06:03:57.318780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.647 [2024-12-16 06:03:57.318798] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:23.647 [2024-12-16 06:03:57.327486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ee5c8 00:35:23.647 [2024-12-16 06:03:57.328278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.647 [2024-12-16 06:03:57.328296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:23.647 [2024-12-16 06:03:57.336496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e5220 00:35:23.647 [2024-12-16 06:03:57.337306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.647 [2024-12-16 06:03:57.337324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:23.647 [2024-12-16 06:03:57.345509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198df118 00:35:23.647 [2024-12-16 06:03:57.346310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.647 [2024-12-16 06:03:57.346328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:23.647 [2024-12-16 06:03:57.354496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e99d8 00:35:23.647 [2024-12-16 06:03:57.355310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.647 [2024-12-16 06:03:57.355327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:23.647 [2024-12-16 06:03:57.363551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198dece0 00:35:23.648 [2024-12-16 06:03:57.364270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.648 [2024-12-16 06:03:57.364290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:23.648 [2024-12-16 06:03:57.372591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198de038 00:35:23.648 [2024-12-16 06:03:57.373403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.648 [2024-12-16 06:03:57.373421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:23.648 [2024-12-16 06:03:57.381639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f7da8 00:35:23.648 [2024-12-16 06:03:57.382440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.648 [2024-12-16 06:03:57.382457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:23.648 [2024-12-16 06:03:57.390704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198efae0 00:35:23.648 [2024-12-16 06:03:57.391539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.648 [2024-12-16 06:03:57.391557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:23.648 [2024-12-16 06:03:57.399709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e2c28 00:35:23.648 [2024-12-16 06:03:57.400513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.648 [2024-12-16 06:03:57.400530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:23.648 [2024-12-16 06:03:57.408733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f9f68 00:35:23.648 [2024-12-16 06:03:57.409542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.648 [2024-12-16 06:03:57.409559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:23.648 [2024-12-16 06:03:57.418065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e0ea0 00:35:23.648 [2024-12-16 06:03:57.418656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.648 [2024-12-16 06:03:57.418673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:23.648 [2024-12-16 06:03:57.427236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198f5378 00:35:23.648 [2024-12-16 06:03:57.428160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.648 [2024-12-16 06:03:57.428178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:23.648 [2024-12-16 06:03:57.436267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fe720 00:35:23.648 [2024-12-16 06:03:57.437200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.648 [2024-12-16 06:03:57.437217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:23.648 [2024-12-16 06:03:57.445280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fa3a0 00:35:23.648 [2024-12-16 06:03:57.446213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.648 [2024-12-16 06:03:57.446230] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:23.648 [2024-12-16 06:03:57.454280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fe2e8 00:35:23.648 [2024-12-16 06:03:57.455200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.648 [2024-12-16 06:03:57.455216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:23.648 [2024-12-16 06:03:57.462699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e8d30 00:35:23.648 [2024-12-16 06:03:57.463603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.648 [2024-12-16 06:03:57.463619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:23.648 [2024-12-16 06:03:57.472136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198ff3c8 00:35:23.648 [2024-12-16 06:03:57.473166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.648 [2024-12-16 06:03:57.473184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:23.648 [2024-12-16 06:03:57.481539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198fda78 00:35:23.648 [2024-12-16 06:03:57.482679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.648 [2024-12-16 06:03:57.482697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:23.648 [2024-12-16 06:03:57.491018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdbff10) with pdu=0x2000198e9e10 00:35:23.648 [2024-12-16 06:03:57.492278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:23.648 [2024-12-16 06:03:57.492296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:23.648 28237.00 IOPS, 110.30 MiB/s 00:35:23.648 Latency(us) 00:35:23.648 [2024-12-16T05:03:57.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:23.648 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:23.648 nvme0n1 : 2.00 28239.55 110.31 0.00 0.00 4528.15 2075.31 11297.16 00:35:23.648 [2024-12-16T05:03:57.504Z] =================================================================================================================== 00:35:23.648 [2024-12-16T05:03:57.504Z] Total : 28239.55 110.31 0.00 0.00 4528.15 2075.31 11297.16 00:35:23.648 { 00:35:23.648 "results": [ 00:35:23.648 { 00:35:23.648 "job": "nvme0n1", 00:35:23.648 "core_mask": "0x2", 00:35:23.648 "workload": "randwrite", 00:35:23.648 "status": "finished", 00:35:23.648 "queue_depth": 128, 00:35:23.648 "io_size": 4096, 
00:35:23.648 "runtime": 2.00375, 00:35:23.648 "iops": 28239.55084217093, 00:35:23.648 "mibps": 110.3107454772302, 00:35:23.648 "io_failed": 0, 00:35:23.648 "io_timeout": 0, 00:35:23.648 "avg_latency_us": 4528.150241667613, 00:35:23.648 "min_latency_us": 2075.306666666667, 00:35:23.648 "max_latency_us": 11297.158095238095 00:35:23.648 } 00:35:23.648 ], 00:35:23.648 "core_count": 1 00:35:23.648 } 00:35:23.907 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:23.907 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:23.907 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:23.907 | .driver_specific 00:35:23.907 | .nvme_error 00:35:23.907 | .status_code 00:35:23.907 | .command_transient_transport_error' 00:35:23.907 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:23.907 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 221 > 0 )) 00:35:23.907 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3568544 00:35:23.907 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3568544 ']' 00:35:23.907 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3568544 00:35:23.907 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:23.907 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:23.907 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3568544 00:35:24.165 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:24.165 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:24.165 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3568544' 00:35:24.165 killing process with pid 3568544 00:35:24.165 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3568544 00:35:24.165 Received shutdown signal, test time was about 2.000000 seconds 00:35:24.165 00:35:24.165 Latency(us) 00:35:24.165 [2024-12-16T05:03:58.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:24.165 [2024-12-16T05:03:58.021Z] =================================================================================================================== 00:35:24.165 [2024-12-16T05:03:58.021Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:24.165 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3568544 00:35:24.165 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:24.165 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:24.165 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:24.165 06:03:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:24.165 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:24.165 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3569150 00:35:24.165 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3569150 /var/tmp/bperf.sock 00:35:24.165 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:24.165 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3569150 ']' 00:35:24.165 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:24.165 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:24.165 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:24.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:24.165 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:24.165 06:03:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:24.165 [2024-12-16 06:03:57.986211] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:24.165 [2024-12-16 06:03:57.986256] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3569150 ] 00:35:24.165 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:24.165 Zero copy mechanism will not be used. 
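The trace above starts a second bdevperf instance in wait-for-RPC mode (-z) on its own UNIX-domain socket and waits until that socket is listening before any bperf_rpc call is made. Below is a minimal stand-alone sketch of that launch, assuming the SPDK tree and socket path used in this run; the polling loop is a simplified stand-in for the script's waitforlisten helper, not the helper itself.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # tree path used by this CI job
BPERF_SOCK=/var/tmp/bperf.sock

# Start bdevperf idle (-z) with the parameters from the trace:
# core mask 0x2 (core 1), 128 KiB random writes, queue depth 16, 2 second runtime.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Poll until the RPC socket answers before issuing any configuration RPCs.
until "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done

The -z flag keeps bdevperf from starting I/O on its own, which is what lets the script attach the NVMe-oF controller and arm error injection before perform_tests is issued.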
00:35:24.423 [2024-12-16 06:03:58.041538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.423 [2024-12-16 06:03:58.080940] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:24.423 06:03:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:24.423 06:03:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:24.423 06:03:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:24.423 06:03:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:24.681 06:03:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:24.681 06:03:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.681 06:03:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:24.681 06:03:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.681 06:03:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:24.681 06:03:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:24.939 nvme0n1 00:35:24.939 06:03:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:24.939 06:03:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.939 06:03:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:24.939 06:03:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.939 06:03:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:24.939 06:03:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:25.198 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:25.198 Zero copy mechanism will not be used. 00:35:25.198 Running I/O for 2 seconds... 
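Once the socket is ready, the trace arms the digest-error scenario: NVMe error statistics are enabled with unlimited bdev retries, the accel crc32c error injector is cleared, the controller is attached over TCP with data digests (--ddgst), crc32c corruption is then injected, and perform_tests is driven through bdevperf.py. A condensed sketch of that RPC sequence under the same SPDK_DIR/BPERF_SOCK assumptions as above (the rpc() wrapper is shorthand for this sketch only); the closing jq filter mirrors the get_transient_errcount check the script runs after the workload:

    rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" "$@"; }

    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc accel_error_inject_error -o crc32c -t disable
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc accel_error_inject_error -o crc32c -t corrupt -i 32   # arm crc32c corruption (-i 32 as traced)

    # Drive the 2-second workload, then read the transient transport error count for nvme0n1.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
    rpc bdev_get_iostat -b nvme0n1 | jq -r \
        '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'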
00:35:25.198 [2024-12-16 06:03:58.884234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.198 [2024-12-16 06:03:58.884498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.198 [2024-12-16 06:03:58.884525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.198 [2024-12-16 06:03:58.889459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.198 [2024-12-16 06:03:58.889701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.198 [2024-12-16 06:03:58.889728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.198 [2024-12-16 06:03:58.896486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.198 [2024-12-16 06:03:58.896732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.198 [2024-12-16 06:03:58.896754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.198 [2024-12-16 06:03:58.902931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.198 [2024-12-16 06:03:58.903186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.198 [2024-12-16 06:03:58.903208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.198 [2024-12-16 06:03:58.908739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.198 [2024-12-16 06:03:58.908982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.198 [2024-12-16 06:03:58.909002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.198 [2024-12-16 06:03:58.914299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.198 [2024-12-16 06:03:58.914542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.198 [2024-12-16 06:03:58.914561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.198 [2024-12-16 06:03:58.918774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.198 [2024-12-16 06:03:58.919020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.198 [2024-12-16 06:03:58.919040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.198 [2024-12-16 06:03:58.923260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.198 [2024-12-16 06:03:58.923499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.198 [2024-12-16 06:03:58.923519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.198 [2024-12-16 06:03:58.927669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.198 [2024-12-16 06:03:58.927916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.198 [2024-12-16 06:03:58.927935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.198 [2024-12-16 06:03:58.932121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.198 [2024-12-16 06:03:58.932361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.198 [2024-12-16 06:03:58.932380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.198 [2024-12-16 06:03:58.936518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.198 [2024-12-16 06:03:58.936765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.198 [2024-12-16 06:03:58.936784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.198 [2024-12-16 06:03:58.941177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.198 [2024-12-16 06:03:58.941424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.198 [2024-12-16 06:03:58.941443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.198 [2024-12-16 06:03:58.945719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:58.945967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:58.945987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:58.950244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:58.950489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:58.950509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:58.954795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:58.955041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:58.955061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:58.959427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:58.959669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:58.959688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:58.963880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:58.964124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:58.964143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:58.968269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:58.968512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:58.968531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:58.972673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:58.972923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:58.972943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:58.977369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:58.977611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:58.977630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:58.981992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:58.982236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:58.982254] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:58.987083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:58.987325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:58.987343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:58.991604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:58.991862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:58.991881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:58.996456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:58.996693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:58.996712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:59.001377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:59.001618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:59.001637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:59.007639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:59.007883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:59.007902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:59.013122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:59.013364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:59.013383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:59.018089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:59.018329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 
[2024-12-16 06:03:59.018351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:59.022865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:59.023107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:59.023127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:59.028445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:59.028684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:59.028703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:59.033294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:59.033534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:59.033553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:59.037685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:59.037929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:59.037949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:59.042238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:59.042479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:59.042498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:59.046605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:59.046844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:59.046869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.199 [2024-12-16 06:03:59.051032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.199 [2024-12-16 06:03:59.051278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:25.199 [2024-12-16 06:03:59.051298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.055449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.055688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.055707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.059876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.060123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.060143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.064371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.064610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.064628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.068692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.068941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.068961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.073729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.073992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.074014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.078250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.078494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.078514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.082614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.082865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.082884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.086948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.087192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.087211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.091282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.091525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.091544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.095658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.095905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.095924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.099854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.100102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.100122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.104423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.104670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.104691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.109431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.109684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.109704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.115794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.116042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.116061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.122447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.122692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.122712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.128161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.128402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.128421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.133623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.133883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.133903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.139079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.139320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.139340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.144879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.145122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.145146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.150078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.150319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.150338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.154691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 
[2024-12-16 06:03:59.154950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.154969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.159735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.159984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.160004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.165268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.165523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.165542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.171378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.171620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.171639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.176440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.176689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.176708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.181260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.181528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.181547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.459 [2024-12-16 06:03:59.186292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.459 [2024-12-16 06:03:59.186532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.459 [2024-12-16 06:03:59.186551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.191049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with 
pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.191310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.191329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.195740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.195988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.196008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.200613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.200862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.200881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.205780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.206042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.206061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.211299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.211541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.211560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.216394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.216637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.216656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.221321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.221561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.221581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.226498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.226742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.226761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.231994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.232239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.232258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.237308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.237380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.237396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.242433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.242657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.242676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.247314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.247540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.247560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.252069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.252311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.252331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.257054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.257278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.257298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.262730] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.262949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.262968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.267685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.267905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.267924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.272726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.272945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.272965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.277427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.277640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.277662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.281922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.282138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.282157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.286289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.286517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.286536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.290701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.290922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.290941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
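Each repeated group of entries above is one injected failure completing its round trip: the TCP transport flags a data digest error on the receive path (tcp.c data_crc32_calc_done), and the matching WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), the status code the later get_transient_errcount check tallies via bdev_get_iostat. Working only from a captured log, roughly the same count can be cross-checked with a simple grep (the file name here is illustrative):

    # Rough cross-check against captured bperf output; bperf.log is a placeholder name.
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR' bperf.log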
00:35:25.460 [2024-12-16 06:03:59.295206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.295418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.295437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.299370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.299588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.299607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.303567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.303780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.303800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.307756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.307977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.307997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.460 [2024-12-16 06:03:59.311999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.460 [2024-12-16 06:03:59.312218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.460 [2024-12-16 06:03:59.312238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.720 [2024-12-16 06:03:59.316455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.720 [2024-12-16 06:03:59.316673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.720 [2024-12-16 06:03:59.316692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.720 [2024-12-16 06:03:59.321595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.720 [2024-12-16 06:03:59.321810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.720 [2024-12-16 06:03:59.321829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.720 [2024-12-16 06:03:59.327572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.720 [2024-12-16 06:03:59.327860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.720 [2024-12-16 06:03:59.327879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.720 [2024-12-16 06:03:59.333351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.720 [2024-12-16 06:03:59.333578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.720 [2024-12-16 06:03:59.333597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.720 [2024-12-16 06:03:59.338143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.720 [2024-12-16 06:03:59.338346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.720 [2024-12-16 06:03:59.338365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.720 [2024-12-16 06:03:59.342703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.720 [2024-12-16 06:03:59.342914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.720 [2024-12-16 06:03:59.342933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.720 [2024-12-16 06:03:59.347352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.720 [2024-12-16 06:03:59.347563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.720 [2024-12-16 06:03:59.347582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.720 [2024-12-16 06:03:59.352079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.720 [2024-12-16 06:03:59.352285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.720 [2024-12-16 06:03:59.352305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.720 [2024-12-16 06:03:59.356818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.720 [2024-12-16 06:03:59.357029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.720 [2024-12-16 06:03:59.357048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.720 [2024-12-16 06:03:59.361594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.720 [2024-12-16 06:03:59.361799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.720 [2024-12-16 06:03:59.361819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.720 [2024-12-16 06:03:59.366394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.720 [2024-12-16 06:03:59.366601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.720 [2024-12-16 06:03:59.366620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.720 [2024-12-16 06:03:59.371158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.720 [2024-12-16 06:03:59.371409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.720 [2024-12-16 06:03:59.371429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.720 [2024-12-16 06:03:59.375796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.720 [2024-12-16 06:03:59.376015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.720 [2024-12-16 06:03:59.376034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.720 [2024-12-16 06:03:59.380142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.720 [2024-12-16 06:03:59.380346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.720 [2024-12-16 06:03:59.380365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.720 [2024-12-16 06:03:59.385220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.720 [2024-12-16 06:03:59.385520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.720 [2024-12-16 06:03:59.385540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.720 [2024-12-16 06:03:59.391425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.720 [2024-12-16 06:03:59.391637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.391657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.396577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.396834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.396860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.402679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.402949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.402972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.407710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.407919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.407938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.411940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.412155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.412173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.416632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.416841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.416867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.421515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.421721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.421740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.426828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.427039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 
[2024-12-16 06:03:59.427058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.431376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.431581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.431600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.436110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.436316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.436334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.440796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.441007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.441026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.445634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.445832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.445858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.450886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.451089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.451108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.455262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.455469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.455489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.459569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.459772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.459792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.463631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.463843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.463869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.467956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.468159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.468178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.472289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.472497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.472516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.476769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.476983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.477002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.481215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.481431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.481455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.485636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.485840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.485867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.489923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.490128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.490146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.494145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.494349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.494368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.498531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.498741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.498759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.502776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.502991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.503011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.507170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.507377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.507396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.511659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.511872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.511901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.516269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.721 [2024-12-16 06:03:59.516482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.721 [2024-12-16 06:03:59.516501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.721 [2024-12-16 06:03:59.520541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.722 [2024-12-16 06:03:59.520754] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.722 [2024-12-16 06:03:59.520772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.722 [2024-12-16 06:03:59.524506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.722 [2024-12-16 06:03:59.524715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.722 [2024-12-16 06:03:59.524734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.722 [2024-12-16 06:03:59.528710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.722 [2024-12-16 06:03:59.528923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.722 [2024-12-16 06:03:59.528942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.722 [2024-12-16 06:03:59.533463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.722 [2024-12-16 06:03:59.533708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.722 [2024-12-16 06:03:59.533727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.722 [2024-12-16 06:03:59.538934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.722 [2024-12-16 06:03:59.539134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.722 [2024-12-16 06:03:59.539153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.722 [2024-12-16 06:03:59.543357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.722 [2024-12-16 06:03:59.543564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.722 [2024-12-16 06:03:59.543583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.722 [2024-12-16 06:03:59.547701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.722 [2024-12-16 06:03:59.547915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.722 [2024-12-16 06:03:59.547934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.722 [2024-12-16 06:03:59.551946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.722 [2024-12-16 06:03:59.552151] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.722 [2024-12-16 06:03:59.552170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.722 [2024-12-16 06:03:59.556189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.722 [2024-12-16 06:03:59.556401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.722 [2024-12-16 06:03:59.556419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.722 [2024-12-16 06:03:59.560535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.722 [2024-12-16 06:03:59.560740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.722 [2024-12-16 06:03:59.560759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.722 [2024-12-16 06:03:59.564827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.722 [2024-12-16 06:03:59.565036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.722 [2024-12-16 06:03:59.565055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.722 [2024-12-16 06:03:59.569109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.722 [2024-12-16 06:03:59.569314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.722 [2024-12-16 06:03:59.569332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.722 [2024-12-16 06:03:59.573495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.722 [2024-12-16 06:03:59.573707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.722 [2024-12-16 06:03:59.573726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.981 [2024-12-16 06:03:59.578051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.981 [2024-12-16 06:03:59.578254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.981 [2024-12-16 06:03:59.578273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.582393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 
00:35:25.982 [2024-12-16 06:03:59.582614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.582633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.586696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.586905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.586924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.591008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.591215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.591234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.595382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.595585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.595608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.599810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.600019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.600038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.604457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.604662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.604681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.608699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.608907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.608926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.613133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.613337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.613355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.617372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.617574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.617593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.622113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.622314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.622333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.626905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.627129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.627147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.632318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.632525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.632543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.637262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.637470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.637489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.641484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.641691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.641710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.645726] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.645939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.645959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.650468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.650676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.650696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.655190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.655400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.655419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.659230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.659436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.659455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.663559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.663767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.663786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.668502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.668708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.668727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.673100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.673313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.673332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:35:25.982 [2024-12-16 06:03:59.677442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.677647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.677666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.681998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.682220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.682239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.686290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.686496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.686514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.690571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.690779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.690798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.694928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.695137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.695155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.699229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.699434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.982 [2024-12-16 06:03:59.699452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.982 [2024-12-16 06:03:59.703545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.982 [2024-12-16 06:03:59.703753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.703774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.708144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.708350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.708370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.712617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.712824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.712855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.717130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.717340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.717360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.721072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.721283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.721301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.724998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.725205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.725224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.728906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.729115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.729134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.732805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.733018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.733038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.736705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.736916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.736935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.740614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.740824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.740842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.744499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.744701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.744720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.748420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.748631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.748649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.752303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.752508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.752527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.756608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.756810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.756829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.761434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.761640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.761658] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.766045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.766253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.766272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.770379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.770583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.770601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.774811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.775016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.775035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.779373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.779579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.779598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.784003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.784210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.784228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.788377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.788580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.788599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.792763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.792973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 
[2024-12-16 06:03:59.792992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.797098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.797299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.797318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.801480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.801687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.801705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.806192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.806394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.806413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.810676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.810890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.810909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.814878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.815088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.815106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.818818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.983 [2024-12-16 06:03:59.819029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.983 [2024-12-16 06:03:59.819048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.983 [2024-12-16 06:03:59.823749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.984 [2024-12-16 06:03:59.824046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:25.984 [2024-12-16 06:03:59.824069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.984 [2024-12-16 06:03:59.829357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.984 [2024-12-16 06:03:59.829646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.984 [2024-12-16 06:03:59.829665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.984 [2024-12-16 06:03:59.834371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:25.984 [2024-12-16 06:03:59.834581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.984 [2024-12-16 06:03:59.834601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.243 [2024-12-16 06:03:59.839171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.243 [2024-12-16 06:03:59.839421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.243 [2024-12-16 06:03:59.839440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.243 [2024-12-16 06:03:59.843867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.243 [2024-12-16 06:03:59.844070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.243 [2024-12-16 06:03:59.844089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.243 [2024-12-16 06:03:59.848829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.243 [2024-12-16 06:03:59.849069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.243 [2024-12-16 06:03:59.849088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.243 [2024-12-16 06:03:59.853709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.243 [2024-12-16 06:03:59.853929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.243 [2024-12-16 06:03:59.853949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.243 [2024-12-16 06:03:59.858758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.243 [2024-12-16 06:03:59.858969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.243 [2024-12-16 06:03:59.858988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.863522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.863730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.863749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.868148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.868354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.868373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.244 6536.00 IOPS, 817.00 MiB/s [2024-12-16T05:04:00.100Z] [2024-12-16 06:03:59.873696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.873921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.873940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.878190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.878396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.878414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.882280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.882485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.882504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.886626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.886831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.886856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.891361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 
06:03:59.891567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.891586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.896512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.896722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.896741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.901358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.901570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.901589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.905835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.906051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.906070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.910191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.910397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.910416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.914624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.914831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.914854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.918806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.919018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.919037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.923697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with 
pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.923906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.923926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.928374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.928576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.928595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.932763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.932972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.932991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.937096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.937299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.937318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.941407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.941612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.941631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.945279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.945485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.945507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.949213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.949419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.949439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.953096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.953302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.953337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.956987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.957193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.957213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.960840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.961051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.961070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.964724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.964931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.964950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.968592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.968794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.968813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.972475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.972676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.972695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.976333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.976534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.244 [2024-12-16 06:03:59.976553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.244 [2024-12-16 06:03:59.980208] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.244 [2024-12-16 06:03:59.980414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:03:59.980433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:03:59.984124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:03:59.984332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:03:59.984351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:03:59.987963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:03:59.988168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:03:59.988187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:03:59.991780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:03:59.992001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:03:59.992020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:03:59.995626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:03:59.995828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:03:59.995852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:03:59.999440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:03:59.999641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:03:59.999660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:04:00.003355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.003567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.003586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
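The data digest failures reported above come from the NVMe/TCP data digest, a CRC32C checksum carried with each data-bearing PDU when digests are negotiated on the connection; when the payload received by the target does not hash to the digest that arrived with it, data_crc32_calc_done reports the mismatch and the command is completed with a transport-level error instead of being executed. Below is a minimal, self-contained sketch of the CRC32C (Castagnoli) calculation for reference only; it is not SPDK's implementation (SPDK carries its own CRC32C helpers in its util library), just the standard reflected bit-by-bit form of the same checksum.

#include <stddef.h>
#include <stdint.h>

/* Reflected CRC32C (Castagnoli): polynomial 0x1EDC6F41 (0x82F63B78 reflected),
 * initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF. Bit-by-bit for clarity;
 * production code normally uses a table-driven loop or the CPU CRC32C
 * instructions instead. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
                crc ^= buf[i];
                for (int bit = 0; bit < 8; bit++) {
                        uint32_t mask = 0u - (crc & 1u);   /* 0 or 0xFFFFFFFF */
                        crc = (crc >> 1) ^ (0x82F63B78u & mask);
                }
        }
        return crc ^ 0xFFFFFFFFu;
}

As a sanity check, crc32c((const uint8_t *)"123456789", 9) evaluates to the standard CRC32C check value 0xE3069283. A digest error like the ones logged here simply means the value computed over the received data did not match the digest that accompanied the PDU, so each WRITE is failed at the transport level rather than written.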
00:35:26.245 [2024-12-16 06:04:00.007751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.007965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.007984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:04:00.013719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.013929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.013949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:04:00.018551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.018761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.018780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:04:00.022993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.023197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.023215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:04:00.027352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.027560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.027579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:04:00.031573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.031779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.031798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:04:00.036280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.036514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.036535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:04:00.040824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.041043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.041063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:04:00.044957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.045168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.045188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:04:00.049057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.049263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.049282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:04:00.053120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.053326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.053352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:04:00.057249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.057454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.057474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:04:00.061334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.061543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.061562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:04:00.065466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.065674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.065694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:04:00.069518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.069723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.069743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:04:00.073589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.073815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.073835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:04:00.078030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.078239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.078258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:04:00.082126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.082332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.082352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:04:00.086158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.086365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.086385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:04:00.090241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.090448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.090468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.245 [2024-12-16 06:04:00.094308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.245 [2024-12-16 06:04:00.094514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.245 [2024-12-16 06:04:00.094534] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.505 [2024-12-16 06:04:00.098340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.505 [2024-12-16 06:04:00.098549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.505 [2024-12-16 06:04:00.098569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.505 [2024-12-16 06:04:00.102373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.505 [2024-12-16 06:04:00.102584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.505 [2024-12-16 06:04:00.102605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.505 [2024-12-16 06:04:00.106404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.505 [2024-12-16 06:04:00.106608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.505 [2024-12-16 06:04:00.106628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.505 [2024-12-16 06:04:00.110469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.505 [2024-12-16 06:04:00.110684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.505 [2024-12-16 06:04:00.110703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.505 [2024-12-16 06:04:00.114491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.505 [2024-12-16 06:04:00.114700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.505 [2024-12-16 06:04:00.114719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.505 [2024-12-16 06:04:00.118560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.505 [2024-12-16 06:04:00.118769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.505 [2024-12-16 06:04:00.118789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.505 [2024-12-16 06:04:00.122590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.505 [2024-12-16 06:04:00.122798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.505 
[2024-12-16 06:04:00.122821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.505 [2024-12-16 06:04:00.126664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.505 [2024-12-16 06:04:00.126879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.505 [2024-12-16 06:04:00.126898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.505 [2024-12-16 06:04:00.130699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.505 [2024-12-16 06:04:00.130910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.505 [2024-12-16 06:04:00.130929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.505 [2024-12-16 06:04:00.134746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.505 [2024-12-16 06:04:00.134956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.505 [2024-12-16 06:04:00.134975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.505 [2024-12-16 06:04:00.138793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.505 [2024-12-16 06:04:00.139006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.505 [2024-12-16 06:04:00.139025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.505 [2024-12-16 06:04:00.143035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.505 [2024-12-16 06:04:00.143248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.505 [2024-12-16 06:04:00.143267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.505 [2024-12-16 06:04:00.147372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.505 [2024-12-16 06:04:00.147580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.505 [2024-12-16 06:04:00.147600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.151566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.151775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.151794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.155600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.155808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.155828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.159800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.160013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.160033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.163956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.164156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.164174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.167914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.168120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.168139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.171882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.172087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.172106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.175922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.176130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.176149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.179926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.180136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.180155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.183968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.184177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.184196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.188027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.188230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.188249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.192042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.192259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.192277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.196074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.196282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.196301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.200102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.200330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.200349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.204163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.204372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.204391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.208578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.208784] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.208803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.213248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.213463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.213482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.218774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.218984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.219005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.224010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.224216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.224235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.229023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.229232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.229251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.233667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.233882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.233904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.238009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.238220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.238239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.242929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 
[2024-12-16 06:04:00.243138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.243157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.247977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.248185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.248204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.252888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.253097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.253116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.258089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.258320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.258339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.264471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.264766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.264785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.271202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.271486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.506 [2024-12-16 06:04:00.271505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.506 [2024-12-16 06:04:00.278136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.506 [2024-12-16 06:04:00.278408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.507 [2024-12-16 06:04:00.278427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.507 [2024-12-16 06:04:00.284871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) 
with pdu=0x2000198fef90 00:35:26.507 [2024-12-16 06:04:00.285157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.507 [2024-12-16 06:04:00.285177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.507 [2024-12-16 06:04:00.291409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.507 [2024-12-16 06:04:00.291514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.507 [2024-12-16 06:04:00.291533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.507 [2024-12-16 06:04:00.298049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.507 [2024-12-16 06:04:00.298241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.507 [2024-12-16 06:04:00.298261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.507 [2024-12-16 06:04:00.304936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.507 [2024-12-16 06:04:00.305248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.507 [2024-12-16 06:04:00.305269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.507 [2024-12-16 06:04:00.311685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.507 [2024-12-16 06:04:00.311996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.507 [2024-12-16 06:04:00.312015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.507 [2024-12-16 06:04:00.318403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.507 [2024-12-16 06:04:00.318655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.507 [2024-12-16 06:04:00.318674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.507 [2024-12-16 06:04:00.324074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.507 [2024-12-16 06:04:00.324265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.507 [2024-12-16 06:04:00.324285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.507 [2024-12-16 06:04:00.328647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.507 [2024-12-16 06:04:00.328852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.507 [2024-12-16 06:04:00.328872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.507 [2024-12-16 06:04:00.333183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.507 [2024-12-16 06:04:00.333375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.507 [2024-12-16 06:04:00.333394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.507 [2024-12-16 06:04:00.337411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.507 [2024-12-16 06:04:00.337606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.507 [2024-12-16 06:04:00.337625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.507 [2024-12-16 06:04:00.342321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.507 [2024-12-16 06:04:00.342519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.507 [2024-12-16 06:04:00.342539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.507 [2024-12-16 06:04:00.347239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.507 [2024-12-16 06:04:00.347434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.507 [2024-12-16 06:04:00.347453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.507 [2024-12-16 06:04:00.351627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.507 [2024-12-16 06:04:00.351822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.507 [2024-12-16 06:04:00.351842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.507 [2024-12-16 06:04:00.355626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.507 [2024-12-16 06:04:00.355830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.507 [2024-12-16 06:04:00.355856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.767 [2024-12-16 06:04:00.359430] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.767 [2024-12-16 06:04:00.359627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.767 [2024-12-16 06:04:00.359647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.767 [2024-12-16 06:04:00.363216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.767 [2024-12-16 06:04:00.363416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.767 [2024-12-16 06:04:00.363435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.767 [2024-12-16 06:04:00.367045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.767 [2024-12-16 06:04:00.367245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.767 [2024-12-16 06:04:00.367263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.767 [2024-12-16 06:04:00.370837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.767 [2024-12-16 06:04:00.371050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.767 [2024-12-16 06:04:00.371074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.767 [2024-12-16 06:04:00.374595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.767 [2024-12-16 06:04:00.374793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.767 [2024-12-16 06:04:00.374812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.767 [2024-12-16 06:04:00.378372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.767 [2024-12-16 06:04:00.378561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.767 [2024-12-16 06:04:00.378580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.767 [2024-12-16 06:04:00.382191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.767 [2024-12-16 06:04:00.382393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.767 [2024-12-16 06:04:00.382412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
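Every completion line printed by spdk_nvme_print_completion in this loop encodes the same 16-bit status field from completion queue entry dword 3: the (00/22) pair is Status Code Type 0x0 (generic command status) with Status Code 0x22 (Transient Transport Error), and the trailing p/m/dnr values are the phase tag, More, and Do Not Retry bits. The small sketch below shows how those fields unpack from the raw status word; the layout follows the NVMe base specification, and the example value is chosen to reproduce the (00/22) status seen here rather than taken from SPDK's own print routine.

#include <stdint.h>
#include <stdio.h>

/* Unpack the 16-bit NVMe completion status field (CQE DW3 bits 31:16):
 *   bit 0      phase tag (P)
 *   bits 8:1   status code (SC)
 *   bits 11:9  status code type (SCT)
 *   bits 13:12 command retry delay (CRD)
 *   bit 14     more (M)
 *   bit 15     do not retry (DNR)
 */
static void print_status(uint16_t status)
{
        unsigned p   = status & 0x1u;
        unsigned sc  = (status >> 1) & 0xFFu;
        unsigned sct = (status >> 9) & 0x7u;
        unsigned m   = (status >> 14) & 0x1u;
        unsigned dnr = (status >> 15) & 0x1u;

        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
        /* SCT 0x0 / SC 0x22: the TRANSIENT TRANSPORT ERROR status reported
         * for the WRITE commands in the log above. */
        print_status((uint16_t)((0x0u << 9) | (0x22u << 1)));
        return 0;
}

Note that dnr is 0 in every completion printed here; with the Do Not Retry bit clear, the transient transport error marks each failed WRITE as retryable rather than fatal.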
00:35:26.767 [2024-12-16 06:04:00.385960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.767 [2024-12-16 06:04:00.386162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.767 [2024-12-16 06:04:00.386181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.767 [2024-12-16 06:04:00.389703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.767 [2024-12-16 06:04:00.389906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.767 [2024-12-16 06:04:00.389926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.767 [2024-12-16 06:04:00.393467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.767 [2024-12-16 06:04:00.393667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.767 [2024-12-16 06:04:00.393687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.767 [2024-12-16 06:04:00.397312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.767 [2024-12-16 06:04:00.397513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.767 [2024-12-16 06:04:00.397533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.767 [2024-12-16 06:04:00.401118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.767 [2024-12-16 06:04:00.401314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.767 [2024-12-16 06:04:00.401335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.767 [2024-12-16 06:04:00.404897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.767 [2024-12-16 06:04:00.405101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.767 [2024-12-16 06:04:00.405122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.767 [2024-12-16 06:04:00.408647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.767 [2024-12-16 06:04:00.408845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.767 [2024-12-16 06:04:00.408871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.767 [2024-12-16 06:04:00.412496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.767 [2024-12-16 06:04:00.412696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.767 [2024-12-16 06:04:00.412715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.767 [2024-12-16 06:04:00.416951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.767 [2024-12-16 06:04:00.417142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.767 [2024-12-16 06:04:00.417161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.767 [2024-12-16 06:04:00.421917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.767 [2024-12-16 06:04:00.422114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.767 [2024-12-16 06:04:00.422133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.426308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.426502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.426521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.430458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.430652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.430671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.434560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.434761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.434780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.438789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.438990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.439009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.442991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.443187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.443206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.447378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.447586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.447605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.452891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.453226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.453245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.458591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.458823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.458843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.464900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.465125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.465144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.469905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.470101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.470120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.474590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.474788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.474808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.478964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.479159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.479178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.483262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.483461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.483484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.488386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.488582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.488601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.492891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.493086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.493105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.497888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.498082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.498101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.502525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.502718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.502738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.507208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.507409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 
[2024-12-16 06:04:00.507429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.511991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.512189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.512208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.516593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.516786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.516806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.521220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.521412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.521431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.525830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.526048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.526068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.530590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.530790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.530810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.536064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.536266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.536286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.541248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.541446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.541465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.546029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.546223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.546244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.550815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.551015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.768 [2024-12-16 06:04:00.551035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.768 [2024-12-16 06:04:00.555386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.768 [2024-12-16 06:04:00.555587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.769 [2024-12-16 06:04:00.555607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.769 [2024-12-16 06:04:00.560509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.769 [2024-12-16 06:04:00.560742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.769 [2024-12-16 06:04:00.560761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.769 [2024-12-16 06:04:00.565412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.769 [2024-12-16 06:04:00.565621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.769 [2024-12-16 06:04:00.565640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.769 [2024-12-16 06:04:00.570000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.769 [2024-12-16 06:04:00.570195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.769 [2024-12-16 06:04:00.570215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.769 [2024-12-16 06:04:00.574234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.769 [2024-12-16 06:04:00.574436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.769 [2024-12-16 06:04:00.574455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.769 [2024-12-16 06:04:00.578176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.769 [2024-12-16 06:04:00.578374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.769 [2024-12-16 06:04:00.578394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.769 [2024-12-16 06:04:00.582020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.769 [2024-12-16 06:04:00.582217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.769 [2024-12-16 06:04:00.582237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.769 [2024-12-16 06:04:00.585885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.769 [2024-12-16 06:04:00.586085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.769 [2024-12-16 06:04:00.586104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.769 [2024-12-16 06:04:00.589735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.769 [2024-12-16 06:04:00.589937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.769 [2024-12-16 06:04:00.589957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.769 [2024-12-16 06:04:00.593588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.769 [2024-12-16 06:04:00.593797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.769 [2024-12-16 06:04:00.593817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.769 [2024-12-16 06:04:00.598031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.769 [2024-12-16 06:04:00.598226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.769 [2024-12-16 06:04:00.598245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.769 [2024-12-16 06:04:00.602605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.769 [2024-12-16 06:04:00.602797] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.769 [2024-12-16 06:04:00.602820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.769 [2024-12-16 06:04:00.607685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.769 [2024-12-16 06:04:00.607890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.769 [2024-12-16 06:04:00.607910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.769 [2024-12-16 06:04:00.611809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.769 [2024-12-16 06:04:00.612012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.769 [2024-12-16 06:04:00.612031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.769 [2024-12-16 06:04:00.616055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.769 [2024-12-16 06:04:00.616251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.769 [2024-12-16 06:04:00.616270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.769 [2024-12-16 06:04:00.620256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:26.769 [2024-12-16 06:04:00.620453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.769 [2024-12-16 06:04:00.620472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.029 [2024-12-16 06:04:00.624429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.029 [2024-12-16 06:04:00.624620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.029 [2024-12-16 06:04:00.624639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.029 [2024-12-16 06:04:00.628578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.029 [2024-12-16 06:04:00.628772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.029 [2024-12-16 06:04:00.628791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.029 [2024-12-16 06:04:00.632780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.029 
[2024-12-16 06:04:00.632984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.029 [2024-12-16 06:04:00.633003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.029 [2024-12-16 06:04:00.636740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.029 [2024-12-16 06:04:00.636941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.029 [2024-12-16 06:04:00.636961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.029 [2024-12-16 06:04:00.640896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.029 [2024-12-16 06:04:00.641099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.029 [2024-12-16 06:04:00.641119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.029 [2024-12-16 06:04:00.645598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.029 [2024-12-16 06:04:00.645780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.029 [2024-12-16 06:04:00.645800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.029 [2024-12-16 06:04:00.650187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.029 [2024-12-16 06:04:00.650372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.029 [2024-12-16 06:04:00.650391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.029 [2024-12-16 06:04:00.655191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.029 [2024-12-16 06:04:00.655376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.029 [2024-12-16 06:04:00.655397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.029 [2024-12-16 06:04:00.660266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.029 [2024-12-16 06:04:00.660442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.029 [2024-12-16 06:04:00.660461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.029 [2024-12-16 06:04:00.664437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) 
with pdu=0x2000198fef90 00:35:27.029 [2024-12-16 06:04:00.664625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.029 [2024-12-16 06:04:00.664644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.029 [2024-12-16 06:04:00.668689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.029 [2024-12-16 06:04:00.668881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.029 [2024-12-16 06:04:00.668900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.029 [2024-12-16 06:04:00.672616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.029 [2024-12-16 06:04:00.672803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.029 [2024-12-16 06:04:00.672823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.029 [2024-12-16 06:04:00.676516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.029 [2024-12-16 06:04:00.676694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.029 [2024-12-16 06:04:00.676713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.029 [2024-12-16 06:04:00.680524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.029 [2024-12-16 06:04:00.680699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.029 [2024-12-16 06:04:00.680718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.029 [2024-12-16 06:04:00.684864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.029 [2024-12-16 06:04:00.685046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.029 [2024-12-16 06:04:00.685064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.029 [2024-12-16 06:04:00.689371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.029 [2024-12-16 06:04:00.689555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.029 [2024-12-16 06:04:00.689591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.029 [2024-12-16 06:04:00.694407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.029 [2024-12-16 06:04:00.694586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.029 [2024-12-16 06:04:00.694606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.029 [2024-12-16 06:04:00.699403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.029 [2024-12-16 06:04:00.699584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.029 [2024-12-16 06:04:00.699603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.029 [2024-12-16 06:04:00.703542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.029 [2024-12-16 06:04:00.703730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.703750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.707534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.707721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.707740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.711808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.711996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.712016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.716245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.716426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.716449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.720384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.720566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.720585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.724284] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.724469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.724488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.728213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.728397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.728415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.732122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.732310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.732330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.736318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.736507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.736525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.740433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.740615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.740634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.744504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.744684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.744703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.749316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.749508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.749528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:27.030 [2024-12-16 06:04:00.754401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.754589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.754608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.758616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.758803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.758822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.762861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.763065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.763084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.767344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.767528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.767547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.771503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.771684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.771703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.775628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.775808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.775827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.779723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.779919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.779938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.783859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.784046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.784065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.788087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.788269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.788292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.792311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.792498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.792518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.796545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.796732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.796752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.800761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.800963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.800982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.804958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.805137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.805155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.809188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.809372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.809391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.813494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.813678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.813697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.817332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.030 [2024-12-16 06:04:00.817511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.030 [2024-12-16 06:04:00.817529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.030 [2024-12-16 06:04:00.821195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.031 [2024-12-16 06:04:00.821382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.031 [2024-12-16 06:04:00.821401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.031 [2024-12-16 06:04:00.825014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.031 [2024-12-16 06:04:00.825200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.031 [2024-12-16 06:04:00.825225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.031 [2024-12-16 06:04:00.828803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.031 [2024-12-16 06:04:00.828991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.031 [2024-12-16 06:04:00.829010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.031 [2024-12-16 06:04:00.832646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.031 [2024-12-16 06:04:00.832827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.031 [2024-12-16 06:04:00.832853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.031 [2024-12-16 06:04:00.836434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.031 [2024-12-16 06:04:00.836613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.031 [2024-12-16 06:04:00.836633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.031 [2024-12-16 06:04:00.840243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.031 [2024-12-16 06:04:00.840424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.031 [2024-12-16 06:04:00.840443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.031 [2024-12-16 06:04:00.844032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.031 [2024-12-16 06:04:00.844222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.031 [2024-12-16 06:04:00.844241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.031 [2024-12-16 06:04:00.847880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.031 [2024-12-16 06:04:00.848061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.031 [2024-12-16 06:04:00.848080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:27.031 [2024-12-16 06:04:00.851644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.031 [2024-12-16 06:04:00.851828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.031 [2024-12-16 06:04:00.851854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:27.031 [2024-12-16 06:04:00.855518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.031 [2024-12-16 06:04:00.855710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.031 [2024-12-16 06:04:00.855730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:27.031 [2024-12-16 06:04:00.859456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.031 [2024-12-16 06:04:00.859643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.031 [2024-12-16 06:04:00.859663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:27.031 [2024-12-16 06:04:00.863407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90 00:35:27.031 [2024-12-16 06:04:00.863596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.031 
[2024-12-16 06:04:00.863615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:27.031 [2024-12-16 06:04:00.867322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90
00:35:27.031 [2024-12-16 06:04:00.867516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:27.031 [2024-12-16 06:04:00.867537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:27.031 [2024-12-16 06:04:00.871261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90
00:35:27.031 [2024-12-16 06:04:00.871455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:27.031 [2024-12-16 06:04:00.871474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:27.031 6802.00 IOPS, 850.25 MiB/s [2024-12-16T05:04:00.887Z] [2024-12-16 06:04:00.876037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdc0250) with pdu=0x2000198fef90
00:35:27.031 [2024-12-16 06:04:00.876190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:27.031 [2024-12-16 06:04:00.876210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:27.031
00:35:27.031 Latency(us)
00:35:27.031 [2024-12-16T05:04:00.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:27.031 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:35:27.031 nvme0n1 : 2.00 6800.32 850.04 0.00 0.00 2349.08 1802.24 12046.14
00:35:27.031 [2024-12-16T05:04:00.887Z] ===================================================================================================================
00:35:27.031 [2024-12-16T05:04:00.887Z] Total : 6800.32 850.04 0.00 0.00 2349.08 1802.24 12046.14
00:35:27.031 {
00:35:27.031 "results": [
00:35:27.031 {
00:35:27.031 "job": "nvme0n1",
00:35:27.031 "core_mask": "0x2",
00:35:27.031 "workload": "randwrite",
00:35:27.031 "status": "finished",
00:35:27.031 "queue_depth": 16,
00:35:27.031 "io_size": 131072,
00:35:27.031 "runtime": 2.002846,
00:35:27.031 "iops": 6800.323140171536,
00:35:27.031 "mibps": 850.040392521442,
00:35:27.031 "io_failed": 0,
00:35:27.031 "io_timeout": 0,
00:35:27.031 "avg_latency_us": 2349.0816340116075,
00:35:27.031 "min_latency_us": 1802.24,
00:35:27.031 "max_latency_us": 12046.140952380953
00:35:27.031 }
00:35:27.031 ],
00:35:27.031 "core_count": 1
00:35:27.031 }
00:35:27.289 06:04:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:27.289 06:04:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:27.289 06:04:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:27.289 | .driver_specific
00:35:27.289 | .nvme_error
00:35:27.289 | .status_code
00:35:27.289 | .command_transient_transport_error'
00:35:27.289 06:04:00
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:27.289 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 439 > 0 ))
00:35:27.289 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3569150
00:35:27.289 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3569150 ']'
00:35:27.289 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3569150
00:35:27.289 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:35:27.289 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:35:27.289 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3569150
00:35:27.546 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:35:27.546 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:35:27.546 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3569150'
00:35:27.546 killing process with pid 3569150
00:35:27.546 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3569150
00:35:27.546 Received shutdown signal, test time was about 2.000000 seconds
00:35:27.546
00:35:27.546 Latency(us)
00:35:27.546 [2024-12-16T05:04:01.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:27.546 [2024-12-16T05:04:01.402Z] ===================================================================================================================
00:35:27.546 [2024-12-16T05:04:01.402Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:27.546 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3569150
00:35:27.546 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3567387
00:35:27.546 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3567387 ']'
00:35:27.546 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3567387
00:35:27.546 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:35:27.546 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:35:27.546 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3567387
00:35:27.546 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:35:27.546 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:35:27.546 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3567387'
00:35:27.546 killing process with pid 3567387
00:35:27.546 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 --
# kill 3567387 00:35:27.546 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3567387 00:35:27.803 00:35:27.803 real 0m14.044s 00:35:27.803 user 0m26.702s 00:35:27.803 sys 0m4.635s 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:27.803 ************************************ 00:35:27.803 END TEST nvmf_digest_error 00:35:27.803 ************************************ 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:27.803 rmmod nvme_tcp 00:35:27.803 rmmod nvme_fabrics 00:35:27.803 rmmod nvme_keyring 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 3567387 ']' 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 3567387 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 3567387 ']' 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 3567387 00:35:27.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3567387) - No such process 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 3567387 is not found' 00:35:27.803 Process with pid 3567387 is not found 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:27.803 06:04:01 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:27.803 06:04:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:30.335 06:04:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:30.335 00:35:30.335 real 0m36.217s 00:35:30.335 user 0m55.267s 00:35:30.335 sys 0m13.477s 00:35:30.335 06:04:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:30.335 06:04:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:30.335 ************************************ 00:35:30.335 END TEST nvmf_digest 00:35:30.335 ************************************ 00:35:30.335 06:04:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:30.335 06:04:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:30.335 06:04:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:30.335 06:04:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:30.335 06:04:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:30.335 06:04:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:30.335 06:04:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.335 ************************************ 00:35:30.335 START TEST nvmf_bdevperf 00:35:30.335 ************************************ 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:30.336 * Looking for test storage... 
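For reference, the nvmf_digest_error pass check traced above (host/digest.sh@71 through @28, just before the teardown) reduces to reading the bdev iostat JSON over the bperf RPC socket and asserting that the transient transport error counter is non-zero. A minimal stand-alone sketch of that check, using only the rpc.py path, socket, and jq filter that appear in this log; the errcount variable name is illustrative, not taken from the test script:

  # read iostat for nvme0n1 through the bdevperf RPC socket and extract the
  # counter that each injected data digest error increments (439 in this run)
  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # the digest_error test only passes when at least one transient transport error was recorded
  (( errcount > 0 ))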
00:35:30.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:30.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.336 --rc genhtml_branch_coverage=1 00:35:30.336 --rc genhtml_function_coverage=1 00:35:30.336 --rc genhtml_legend=1 00:35:30.336 --rc geninfo_all_blocks=1 00:35:30.336 --rc geninfo_unexecuted_blocks=1 00:35:30.336 00:35:30.336 ' 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:30.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.336 --rc genhtml_branch_coverage=1 00:35:30.336 --rc genhtml_function_coverage=1 00:35:30.336 --rc genhtml_legend=1 00:35:30.336 --rc geninfo_all_blocks=1 00:35:30.336 --rc geninfo_unexecuted_blocks=1 00:35:30.336 00:35:30.336 ' 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:30.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.336 --rc genhtml_branch_coverage=1 00:35:30.336 --rc genhtml_function_coverage=1 00:35:30.336 --rc genhtml_legend=1 00:35:30.336 --rc geninfo_all_blocks=1 00:35:30.336 --rc geninfo_unexecuted_blocks=1 00:35:30.336 00:35:30.336 ' 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:30.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.336 --rc genhtml_branch_coverage=1 00:35:30.336 --rc genhtml_function_coverage=1 00:35:30.336 --rc genhtml_legend=1 00:35:30.336 --rc geninfo_all_blocks=1 00:35:30.336 --rc geninfo_unexecuted_blocks=1 00:35:30.336 00:35:30.336 ' 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:30.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:30.336 06:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:30.336 06:04:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:30.336 06:04:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:30.336 06:04:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:30.336 06:04:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:30.336 06:04:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:30.336 06:04:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:30.336 06:04:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:30.336 06:04:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:30.336 06:04:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:30.337 06:04:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:30.337 06:04:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:30.337 06:04:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:30.337 06:04:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:30.337 06:04:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:30.337 06:04:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:35.603 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:35.603 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:35.603 
06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:35.603 Found net devices under 0000:af:00.0: cvl_0_0 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:35.603 Found net devices under 0000:af:00.1: cvl_0_1 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # is_hw=yes 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:35.603 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:35.604 06:04:09 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:35.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:35.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:35:35.604 00:35:35.604 --- 10.0.0.2 ping statistics --- 00:35:35.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:35.604 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:35.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:35.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:35:35.604 00:35:35.604 --- 10.0.0.1 ping statistics --- 00:35:35.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:35.604 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # return 0 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=3573092 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 3573092 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3573092 ']' 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:35.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:35.604 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:35.863 [2024-12-16 06:04:09.459446] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
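For orientation, the network plumbing that nvmf_tcp_init traced above condenses to roughly the following commands (a sketch reconstructed from the xtrace; cvl_0_0 and cvl_0_1 are the two E810/ice ports discovered earlier, and the target side lives in the cvl_0_0_ns_spdk namespace):

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address (host side)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # host -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> host reachability
  # the target itself is then launched inside the namespace:
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

Both ping checks above came back with a single reply and 0% loss, so the 10.0.0.0/24 path between host and namespace is usable before the NVMe-oF target is configured.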
00:35:35.863 [2024-12-16 06:04:09.459492] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:35.863 [2024-12-16 06:04:09.520850] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:35.863 [2024-12-16 06:04:09.562546] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:35.863 [2024-12-16 06:04:09.562582] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:35.863 [2024-12-16 06:04:09.562589] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:35.863 [2024-12-16 06:04:09.562595] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:35.863 [2024-12-16 06:04:09.562600] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:35.863 [2024-12-16 06:04:09.562633] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:35:35.863 [2024-12-16 06:04:09.562706] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:35:35.863 [2024-12-16 06:04:09.562708] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:35.863 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:35.863 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:35:35.863 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:35.863 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:35.863 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:35.863 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:35.863 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:35.863 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.863 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:35.863 [2024-12-16 06:04:09.697342] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.122 Malloc0 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
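The three reactor threads reported above follow directly from the -m 0xE core mask handed to nvmf_tgt: 0xE is binary 1110, so cores 1, 2 and 3 are claimed while core 0 is left for the bdevperf initiator, which the log later shows starting with -c 0x1 on core 0. A throwaway one-liner to decode such a mask, purely illustrative and not part of the harness:

  mask=0xE; for i in $(seq 0 7); do (( (mask >> i) & 1 )) && echo "reactor on core $i"; done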
00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.122 [2024-12-16 06:04:09.767714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:35:36.122 { 00:35:36.122 "params": { 00:35:36.122 "name": "Nvme$subsystem", 00:35:36.122 "trtype": "$TEST_TRANSPORT", 00:35:36.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:36.122 "adrfam": "ipv4", 00:35:36.122 "trsvcid": "$NVMF_PORT", 00:35:36.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:36.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:36.122 "hdgst": ${hdgst:-false}, 00:35:36.122 "ddgst": ${ddgst:-false} 00:35:36.122 }, 00:35:36.122 "method": "bdev_nvme_attach_controller" 00:35:36.122 } 00:35:36.122 EOF 00:35:36.122 )") 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:35:36.122 06:04:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:35:36.122 "params": { 00:35:36.122 "name": "Nvme1", 00:35:36.122 "trtype": "tcp", 00:35:36.122 "traddr": "10.0.0.2", 00:35:36.122 "adrfam": "ipv4", 00:35:36.122 "trsvcid": "4420", 00:35:36.122 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:36.122 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:36.122 "hdgst": false, 00:35:36.122 "ddgst": false 00:35:36.122 }, 00:35:36.122 "method": "bdev_nvme_attach_controller" 00:35:36.122 }' 00:35:36.122 [2024-12-16 06:04:09.817937] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
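Stripped of the rpc_cmd plumbing, the target configuration traced above (bdevperf.sh@17 through @21) boils down to the following RPC sequence. It is shown here as stand-alone scripts/rpc.py calls for clarity; in the harness, rpc_cmd issues the same methods over the target's RPC socket, so treat this as an equivalent sketch rather than the literal code:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf itself does not need the target's RPC socket; it only consumes the JSON that gen_nvmf_target_json printed above (handed over on /dev/fd/62), which describes a single bdev_nvme_attach_controller call for Nvme1 against 10.0.0.2:4420 and nqn.2016-06.io.spdk:cnode1 with header and data digests disabled.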
00:35:36.122 [2024-12-16 06:04:09.817979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3573169 ] 00:35:36.122 [2024-12-16 06:04:09.872566] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.122 [2024-12-16 06:04:09.913199] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:36.381 Running I/O for 1 seconds... 00:35:37.318 11225.00 IOPS, 43.85 MiB/s 00:35:37.318 Latency(us) 00:35:37.318 [2024-12-16T05:04:11.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:37.318 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:37.318 Verification LBA range: start 0x0 length 0x4000 00:35:37.318 Nvme1n1 : 1.01 11289.73 44.10 0.00 0.00 11294.86 869.91 12857.54 00:35:37.318 [2024-12-16T05:04:11.174Z] =================================================================================================================== 00:35:37.318 [2024-12-16T05:04:11.174Z] Total : 11289.73 44.10 0.00 0.00 11294.86 869.91 12857.54 00:35:37.579 06:04:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3573390 00:35:37.579 06:04:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:37.579 06:04:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:37.579 06:04:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:37.579 06:04:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:35:37.579 06:04:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:35:37.579 06:04:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:35:37.579 06:04:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:35:37.579 { 00:35:37.579 "params": { 00:35:37.579 "name": "Nvme$subsystem", 00:35:37.579 "trtype": "$TEST_TRANSPORT", 00:35:37.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:37.579 "adrfam": "ipv4", 00:35:37.579 "trsvcid": "$NVMF_PORT", 00:35:37.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:37.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:37.579 "hdgst": ${hdgst:-false}, 00:35:37.579 "ddgst": ${ddgst:-false} 00:35:37.579 }, 00:35:37.579 "method": "bdev_nvme_attach_controller" 00:35:37.579 } 00:35:37.579 EOF 00:35:37.579 )") 00:35:37.579 06:04:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:35:37.579 06:04:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 
00:35:37.579 06:04:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:35:37.579 06:04:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:35:37.579 "params": { 00:35:37.579 "name": "Nvme1", 00:35:37.579 "trtype": "tcp", 00:35:37.579 "traddr": "10.0.0.2", 00:35:37.579 "adrfam": "ipv4", 00:35:37.579 "trsvcid": "4420", 00:35:37.579 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:37.579 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:37.579 "hdgst": false, 00:35:37.579 "ddgst": false 00:35:37.579 }, 00:35:37.579 "method": "bdev_nvme_attach_controller" 00:35:37.579 }' 00:35:37.579 [2024-12-16 06:04:11.338333] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:37.579 [2024-12-16 06:04:11.338380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3573390 ] 00:35:37.579 [2024-12-16 06:04:11.393649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:37.579 [2024-12-16 06:04:11.430091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:37.839 Running I/O for 15 seconds... 00:35:39.784 11155.00 IOPS, 43.57 MiB/s [2024-12-16T05:04:14.580Z] 11196.00 IOPS, 43.73 MiB/s [2024-12-16T05:04:14.580Z] 06:04:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3573092 00:35:40.724 06:04:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:40.724 [2024-12-16 06:04:14.309227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.724 [2024-12-16 06:04:14.309262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.724 [2024-12-16 06:04:14.309281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:107432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.724 [2024-12-16 06:04:14.309290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.724 [2024-12-16 06:04:14.309303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.724 [2024-12-16 06:04:14.309310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.724 [2024-12-16 06:04:14.309320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.724 [2024-12-16 06:04:14.309328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.724 [2024-12-16 06:04:14.309337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.724 [2024-12-16 06:04:14.309345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.724 [2024-12-16 06:04:14.309354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:107464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.724 [2024-12-16 
06:04:14.309361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.724 [2024-12-16 06:04:14.309369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.724 [2024-12-16 06:04:14.309375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.724 [2024-12-16 06:04:14.309384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.724 [2024-12-16 06:04:14.309392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.724 [2024-12-16 06:04:14.309401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.724 [2024-12-16 06:04:14.309412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.724 [2024-12-16 06:04:14.309422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.724 [2024-12-16 06:04:14.309430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.724 [2024-12-16 06:04:14.309438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.724 [2024-12-16 06:04:14.309446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.724 [2024-12-16 06:04:14.309455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.724 [2024-12-16 06:04:14.309461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.724 [2024-12-16 06:04:14.309469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.724 [2024-12-16 06:04:14.309476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.724 [2024-12-16 06:04:14.309483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.724 [2024-12-16 06:04:14.309492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.724 [2024-12-16 06:04:14.309500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.724 [2024-12-16 06:04:14.309508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.724 [2024-12-16 06:04:14.309517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.724 [2024-12-16 06:04:14.309525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.724 [2024-12-16 06:04:14.309533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.724 [2024-12-16 06:04:14.309540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.724 [2024-12-16 06:04:14.309549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.309977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.309984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:40.725 [2024-12-16 06:04:14.309993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.310000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.310008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.310014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.310022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.310028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.310036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.310043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.310050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.310057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.310065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.310071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.310079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.310085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.310093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.310099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.310107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.310114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.725 [2024-12-16 06:04:14.310121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.725 [2024-12-16 06:04:14.310128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 
06:04:14.310135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.726 [2024-12-16 06:04:14.310142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.726 [2024-12-16 06:04:14.310156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.726 [2024-12-16 06:04:14.310171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:108336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.726 [2024-12-16 06:04:14.310185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.726 [2024-12-16 06:04:14.310200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.726 [2024-12-16 06:04:14.310214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.726 [2024-12-16 06:04:14.310228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.726 [2024-12-16 06:04:14.310243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:108376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.726 [2024-12-16 06:04:14.310257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:108384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.726 [2024-12-16 06:04:14.310271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:108392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.726 [2024-12-16 06:04:14.310285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.726 [2024-12-16 06:04:14.310300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:108408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.726 [2024-12-16 06:04:14.310314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.726 [2024-12-16 06:04:14.310328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.726 [2024-12-16 06:04:14.310343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.726 [2024-12-16 06:04:14.310359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.726 [2024-12-16 06:04:14.310373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:107504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.726 [2024-12-16 06:04:14.310387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.726 [2024-12-16 06:04:14.310401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.726 [2024-12-16 06:04:14.310415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310423] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:109 nsid:1 lba:107528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.726 [2024-12-16 06:04:14.310429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.726 [2024-12-16 06:04:14.310443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:107544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.726 [2024-12-16 06:04:14.310459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:40.726 [2024-12-16 06:04:14.310473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.726 [2024-12-16 06:04:14.310487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.726 [2024-12-16 06:04:14.310501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:107568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.726 [2024-12-16 06:04:14.310515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:107576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.726 [2024-12-16 06:04:14.310534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.726 [2024-12-16 06:04:14.310550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.726 [2024-12-16 06:04:14.310564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 
nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.726 [2024-12-16 06:04:14.310578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.726 [2024-12-16 06:04:14.310592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.726 [2024-12-16 06:04:14.310606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.726 [2024-12-16 06:04:14.310620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.726 [2024-12-16 06:04:14.310635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.726 [2024-12-16 06:04:14.310649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.726 [2024-12-16 06:04:14.310663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.726 [2024-12-16 06:04:14.310678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.726 [2024-12-16 06:04:14.310686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.310692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.310700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.310706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.310714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107680 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.310721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.310729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:107688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.310736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.310744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.310750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.310759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.310766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.310773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.310780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.310788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.310794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.310802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:107728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.310808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.310816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.310823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.310830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.310837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.310845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.310977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.310985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:40.727 [2024-12-16 06:04:14.310992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.310999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.311007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.311022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.311038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.311053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.311068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.311082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.311096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.311110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.311126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 
06:04:14.311140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.311155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:107856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.311170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:107864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.311184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:107872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.311200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:107880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.311216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.311230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.311244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.311258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.311272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.727 [2024-12-16 06:04:14.311287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a3560 is same with the state(6) to be set 00:35:40.727 [2024-12-16 06:04:14.311303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:40.727 [2024-12-16 06:04:14.311308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:40.727 [2024-12-16 06:04:14.311314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107928 len:8 PRP1 0x0 PRP2 0x0 00:35:40.727 [2024-12-16 06:04:14.311321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311364] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8a3560 was disconnected and freed. reset controller. 00:35:40.727 [2024-12-16 06:04:14.311410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:40.727 [2024-12-16 06:04:14.311419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:40.727 [2024-12-16 06:04:14.311434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:40.727 [2024-12-16 06:04:14.311448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:40.727 [2024-12-16 06:04:14.311461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:40.727 [2024-12-16 06:04:14.311468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.727 [2024-12-16 06:04:14.314216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.728 [2024-12-16 06:04:14.314243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.728 [2024-12-16 06:04:14.314864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.728 [2024-12-16 06:04:14.314914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.728 [2024-12-16 06:04:14.314939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.728 [2024-12-16 06:04:14.315394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.728 [2024-12-16 06:04:14.315567] 
nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.728 [2024-12-16 06:04:14.315575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.728 [2024-12-16 06:04:14.315583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.728 [2024-12-16 06:04:14.318331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.728 [2024-12-16 06:04:14.327209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.728 [2024-12-16 06:04:14.327649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.728 [2024-12-16 06:04:14.327692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.728 [2024-12-16 06:04:14.327718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.728 [2024-12-16 06:04:14.328268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.728 [2024-12-16 06:04:14.328436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.728 [2024-12-16 06:04:14.328444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.728 [2024-12-16 06:04:14.328451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.728 [2024-12-16 06:04:14.332694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.728 [2024-12-16 06:04:14.340988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.728 [2024-12-16 06:04:14.341453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.728 [2024-12-16 06:04:14.341500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.728 [2024-12-16 06:04:14.341524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.728 [2024-12-16 06:04:14.342060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.728 [2024-12-16 06:04:14.342242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.728 [2024-12-16 06:04:14.342251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.728 [2024-12-16 06:04:14.342257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.728 [2024-12-16 06:04:14.345176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
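The reset cycle above ends with posix_sock_create reporting "connect() failed, errno = 111", which on Linux is ECONNREFUSED: the target at 10.0.0.2:4420 is no longer accepting NVMe/TCP connections, so the reconnect attempt fails and spdk_nvme_ctrlr_reconnect_poll_async leaves the controller in the failed state. A minimal Python sketch of the same socket-level condition, assuming only the standard library and reusing the address and port shown in the log, is:

import errno
import socket

# Attempt a plain TCP connection to the NVMe/TCP target address seen in the
# log (10.0.0.2, port 4420). If nothing is listening there any more,
# connect() raises ConnectionRefusedError, whose errno is 111 (ECONNREFUSED)
# on Linux - the same value the posix_sock_create error lines report.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(1.0)
try:
    s.connect(("10.0.0.2", 4420))
    print("connected")
except OSError as e:
    print(f"connect() failed, errno = {e.errno} ({errno.errorcode.get(e.errno)})")
finally:
    s.close()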
00:35:40.728 [2024-12-16 06:04:14.353839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.728 [2024-12-16 06:04:14.354128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.728 [2024-12-16 06:04:14.354144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.728 [2024-12-16 06:04:14.354154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.728 [2024-12-16 06:04:14.354312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.728 [2024-12-16 06:04:14.354470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.728 [2024-12-16 06:04:14.354477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.728 [2024-12-16 06:04:14.354483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.728 [2024-12-16 06:04:14.357098] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.728 [2024-12-16 06:04:14.366641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.728 [2024-12-16 06:04:14.367074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.728 [2024-12-16 06:04:14.367119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.728 [2024-12-16 06:04:14.367143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.728 [2024-12-16 06:04:14.367602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.728 [2024-12-16 06:04:14.367760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.728 [2024-12-16 06:04:14.367767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.728 [2024-12-16 06:04:14.367773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.728 [2024-12-16 06:04:14.370385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.728 [2024-12-16 06:04:14.379487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.728 [2024-12-16 06:04:14.379854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.728 [2024-12-16 06:04:14.379870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.728 [2024-12-16 06:04:14.379877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.728 [2024-12-16 06:04:14.380043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.728 [2024-12-16 06:04:14.380212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.728 [2024-12-16 06:04:14.380219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.728 [2024-12-16 06:04:14.380225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.728 [2024-12-16 06:04:14.382795] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.728 [2024-12-16 06:04:14.392290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.728 [2024-12-16 06:04:14.392622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.728 [2024-12-16 06:04:14.392637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.728 [2024-12-16 06:04:14.392644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.728 [2024-12-16 06:04:14.392801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.728 [2024-12-16 06:04:14.392987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.728 [2024-12-16 06:04:14.392998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.728 [2024-12-16 06:04:14.393004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.728 [2024-12-16 06:04:14.395597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.728 [2024-12-16 06:04:14.405085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.728 [2024-12-16 06:04:14.405515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.728 [2024-12-16 06:04:14.405560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.728 [2024-12-16 06:04:14.405584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.728 [2024-12-16 06:04:14.406179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.728 [2024-12-16 06:04:14.406599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.728 [2024-12-16 06:04:14.406606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.728 [2024-12-16 06:04:14.406613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.728 [2024-12-16 06:04:14.409203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.728 [2024-12-16 06:04:14.417909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.728 [2024-12-16 06:04:14.418336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.728 [2024-12-16 06:04:14.418380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.728 [2024-12-16 06:04:14.418404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.728 [2024-12-16 06:04:14.418999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.728 [2024-12-16 06:04:14.419525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.728 [2024-12-16 06:04:14.419533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.728 [2024-12-16 06:04:14.419539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.728 [2024-12-16 06:04:14.423901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.728 [2024-12-16 06:04:14.431750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.728 [2024-12-16 06:04:14.432212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.728 [2024-12-16 06:04:14.432229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.728 [2024-12-16 06:04:14.432237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.728 [2024-12-16 06:04:14.432419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.728 [2024-12-16 06:04:14.432601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.728 [2024-12-16 06:04:14.432609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.729 [2024-12-16 06:04:14.432616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.729 [2024-12-16 06:04:14.435530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.729 [2024-12-16 06:04:14.444602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.729 [2024-12-16 06:04:14.445032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.729 [2024-12-16 06:04:14.445048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.729 [2024-12-16 06:04:14.445055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.729 [2024-12-16 06:04:14.445212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.729 [2024-12-16 06:04:14.445369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.729 [2024-12-16 06:04:14.445376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.729 [2024-12-16 06:04:14.445382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.729 [2024-12-16 06:04:14.447996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.729 [2024-12-16 06:04:14.457388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.729 [2024-12-16 06:04:14.457772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.729 [2024-12-16 06:04:14.457816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.729 [2024-12-16 06:04:14.457839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.729 [2024-12-16 06:04:14.458332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.729 [2024-12-16 06:04:14.458504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.729 [2024-12-16 06:04:14.458512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.729 [2024-12-16 06:04:14.458518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.729 [2024-12-16 06:04:14.461131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.729 [2024-12-16 06:04:14.470174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.729 [2024-12-16 06:04:14.470507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.729 [2024-12-16 06:04:14.470522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.729 [2024-12-16 06:04:14.470529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.729 [2024-12-16 06:04:14.470686] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.729 [2024-12-16 06:04:14.470844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.729 [2024-12-16 06:04:14.470856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.729 [2024-12-16 06:04:14.470862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.729 [2024-12-16 06:04:14.473467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
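Each block above repeats the same four-step cycle roughly every 13 ms: nvme_ctrlr_disconnect announces a controller reset, the TCP connect is refused (errno 111), spdk_nvme_ctrlr_reconnect_poll_async reports that controller reinitialization failed, and bdev_nvme logs "Resetting controller failed." before the next attempt begins. A rough sketch of that retry shape, assuming a hypothetical try_connect() helper and a fixed delay rather than SPDK's actual reconnect policy, is:

import time

def try_connect() -> bool:
    """Hypothetical stand-in for the transport connect step; refused here,
    mirroring the ECONNREFUSED result in the log."""
    return False

MAX_ATTEMPTS = 5        # assumed bound; the log shows the loop continuing until the test moves on
RETRY_DELAY_S = 0.013   # roughly the spacing between reset attempts in the timestamps above

for attempt in range(1, MAX_ATTEMPTS + 1):
    print(f"resetting controller (attempt {attempt})")
    if try_connect():
        print("controller reconnected")
        break
    print("controller reinitialization failed; Resetting controller failed.")
    time.sleep(RETRY_DELAY_S)
else:
    print(f"still disconnected after {MAX_ATTEMPTS} attempts")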
00:35:40.729 [2024-12-16 06:04:14.483017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.729 [2024-12-16 06:04:14.483394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.729 [2024-12-16 06:04:14.483410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.729 [2024-12-16 06:04:14.483417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.729 [2024-12-16 06:04:14.483587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.729 [2024-12-16 06:04:14.483754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.729 [2024-12-16 06:04:14.483762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.729 [2024-12-16 06:04:14.483768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.729 [2024-12-16 06:04:14.486438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.729 [2024-12-16 06:04:14.495830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.729 [2024-12-16 06:04:14.496270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.729 [2024-12-16 06:04:14.496316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.729 [2024-12-16 06:04:14.496339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.729 [2024-12-16 06:04:14.496777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.729 [2024-12-16 06:04:14.496966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.729 [2024-12-16 06:04:14.496974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.729 [2024-12-16 06:04:14.496981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.729 [2024-12-16 06:04:14.499592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.729 [2024-12-16 06:04:14.508665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.729 [2024-12-16 06:04:14.509111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.729 [2024-12-16 06:04:14.509126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.729 [2024-12-16 06:04:14.509133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.729 [2024-12-16 06:04:14.509291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.729 [2024-12-16 06:04:14.509449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.729 [2024-12-16 06:04:14.509456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.729 [2024-12-16 06:04:14.509462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.729 [2024-12-16 06:04:14.512074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.729 [2024-12-16 06:04:14.521506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.729 [2024-12-16 06:04:14.521938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.729 [2024-12-16 06:04:14.521983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.729 [2024-12-16 06:04:14.522006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.729 [2024-12-16 06:04:14.522510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.729 [2024-12-16 06:04:14.522668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.729 [2024-12-16 06:04:14.522675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.729 [2024-12-16 06:04:14.522684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.729 [2024-12-16 06:04:14.525301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.729 [2024-12-16 06:04:14.534244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.729 [2024-12-16 06:04:14.534700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.729 [2024-12-16 06:04:14.534744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.729 [2024-12-16 06:04:14.534767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.729 [2024-12-16 06:04:14.535362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.729 [2024-12-16 06:04:14.535822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.729 [2024-12-16 06:04:14.535829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.729 [2024-12-16 06:04:14.535835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.729 [2024-12-16 06:04:14.538428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.729 [2024-12-16 06:04:14.547064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.729 [2024-12-16 06:04:14.547523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.729 [2024-12-16 06:04:14.547568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.729 [2024-12-16 06:04:14.547592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.729 [2024-12-16 06:04:14.548021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.729 [2024-12-16 06:04:14.548190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.729 [2024-12-16 06:04:14.548198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.729 [2024-12-16 06:04:14.548204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.730 [2024-12-16 06:04:14.550829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.730 [2024-12-16 06:04:14.560112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.730 [2024-12-16 06:04:14.560424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.730 [2024-12-16 06:04:14.560469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.730 [2024-12-16 06:04:14.560494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.730 [2024-12-16 06:04:14.560979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.730 [2024-12-16 06:04:14.561152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.730 [2024-12-16 06:04:14.561160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.730 [2024-12-16 06:04:14.561167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.730 [2024-12-16 06:04:14.565450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.730 [2024-12-16 06:04:14.573697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.730 [2024-12-16 06:04:14.574000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.730 [2024-12-16 06:04:14.574018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.730 [2024-12-16 06:04:14.574025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.730 [2024-12-16 06:04:14.574207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.730 [2024-12-16 06:04:14.574389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.730 [2024-12-16 06:04:14.574397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.730 [2024-12-16 06:04:14.574404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.990 [2024-12-16 06:04:14.577317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.990 [2024-12-16 06:04:14.586773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.990 [2024-12-16 06:04:14.587189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.990 [2024-12-16 06:04:14.587207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.990 [2024-12-16 06:04:14.587214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.990 [2024-12-16 06:04:14.587386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.990 [2024-12-16 06:04:14.587558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.990 [2024-12-16 06:04:14.587566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.990 [2024-12-16 06:04:14.587572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.990 [2024-12-16 06:04:14.590327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.990 [2024-12-16 06:04:14.599760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.990 [2024-12-16 06:04:14.600189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.990 [2024-12-16 06:04:14.600233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.990 [2024-12-16 06:04:14.600256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.990 [2024-12-16 06:04:14.600693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.990 [2024-12-16 06:04:14.600866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.990 [2024-12-16 06:04:14.600874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.990 [2024-12-16 06:04:14.600880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.990 [2024-12-16 06:04:14.603470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.990 [2024-12-16 06:04:14.612569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.990 [2024-12-16 06:04:14.612980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.990 [2024-12-16 06:04:14.612997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.990 [2024-12-16 06:04:14.613004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.990 [2024-12-16 06:04:14.613171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.990 [2024-12-16 06:04:14.613342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.990 [2024-12-16 06:04:14.613349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.990 [2024-12-16 06:04:14.613356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.990 [2024-12-16 06:04:14.615982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.990 [2024-12-16 06:04:14.625373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.990 [2024-12-16 06:04:14.625792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.990 [2024-12-16 06:04:14.625807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.990 [2024-12-16 06:04:14.625814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.990 [2024-12-16 06:04:14.625999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.990 [2024-12-16 06:04:14.626166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.990 [2024-12-16 06:04:14.626173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.990 [2024-12-16 06:04:14.626180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.990 [2024-12-16 06:04:14.628839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.990 9937.33 IOPS, 38.82 MiB/s [2024-12-16T05:04:14.846Z] [2024-12-16 06:04:14.639342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.990 [2024-12-16 06:04:14.639801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.990 [2024-12-16 06:04:14.639861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.990 [2024-12-16 06:04:14.639887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.990 [2024-12-16 06:04:14.640240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.990 [2024-12-16 06:04:14.640406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.990 [2024-12-16 06:04:14.640414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.990 [2024-12-16 06:04:14.640420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.990 [2024-12-16 06:04:14.643010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.990 [2024-12-16 06:04:14.652059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.990 [2024-12-16 06:04:14.652487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.990 [2024-12-16 06:04:14.652532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.990 [2024-12-16 06:04:14.652555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.990 [2024-12-16 06:04:14.653037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.990 [2024-12-16 06:04:14.653204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.990 [2024-12-16 06:04:14.653211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.991 [2024-12-16 06:04:14.653221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.991 [2024-12-16 06:04:14.655807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.991 [2024-12-16 06:04:14.664821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.991 [2024-12-16 06:04:14.665215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.991 [2024-12-16 06:04:14.665230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.991 [2024-12-16 06:04:14.665237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.991 [2024-12-16 06:04:14.665394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.991 [2024-12-16 06:04:14.665552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.991 [2024-12-16 06:04:14.665559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.991 [2024-12-16 06:04:14.665565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.991 [2024-12-16 06:04:14.668175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.991 [2024-12-16 06:04:14.677660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.991 [2024-12-16 06:04:14.678103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.991 [2024-12-16 06:04:14.678118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.991 [2024-12-16 06:04:14.678126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.991 [2024-12-16 06:04:14.678293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.991 [2024-12-16 06:04:14.678459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.991 [2024-12-16 06:04:14.678466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.991 [2024-12-16 06:04:14.678473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.991 [2024-12-16 06:04:14.681085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.991 [2024-12-16 06:04:14.690392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.991 [2024-12-16 06:04:14.690810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.991 [2024-12-16 06:04:14.690826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.991 [2024-12-16 06:04:14.690833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.991 [2024-12-16 06:04:14.691004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.991 [2024-12-16 06:04:14.691172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.991 [2024-12-16 06:04:14.691179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.991 [2024-12-16 06:04:14.691186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.991 [2024-12-16 06:04:14.693774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.991 [2024-12-16 06:04:14.703222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.991 [2024-12-16 06:04:14.703682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.991 [2024-12-16 06:04:14.703734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.991 [2024-12-16 06:04:14.703758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.991 [2024-12-16 06:04:14.704317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.991 [2024-12-16 06:04:14.704484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.991 [2024-12-16 06:04:14.704491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.991 [2024-12-16 06:04:14.704497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.991 [2024-12-16 06:04:14.707090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.991 [2024-12-16 06:04:14.716014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.991 [2024-12-16 06:04:14.716361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.991 [2024-12-16 06:04:14.716378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.991 [2024-12-16 06:04:14.716385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.991 [2024-12-16 06:04:14.716543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.991 [2024-12-16 06:04:14.716702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.991 [2024-12-16 06:04:14.716709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.991 [2024-12-16 06:04:14.716715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.991 [2024-12-16 06:04:14.719315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.991 [2024-12-16 06:04:14.728820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.991 [2024-12-16 06:04:14.729244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.991 [2024-12-16 06:04:14.729259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.991 [2024-12-16 06:04:14.729266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.991 [2024-12-16 06:04:14.729423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.991 [2024-12-16 06:04:14.729580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.991 [2024-12-16 06:04:14.729587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.991 [2024-12-16 06:04:14.729593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.991 [2024-12-16 06:04:14.732192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.991 [2024-12-16 06:04:14.741548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.991 [2024-12-16 06:04:14.742006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.991 [2024-12-16 06:04:14.742065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.991 [2024-12-16 06:04:14.742090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.991 [2024-12-16 06:04:14.742669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.991 [2024-12-16 06:04:14.743271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.991 [2024-12-16 06:04:14.743299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.991 [2024-12-16 06:04:14.743320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.991 [2024-12-16 06:04:14.747771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.991 [2024-12-16 06:04:14.755338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.991 [2024-12-16 06:04:14.755785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.991 [2024-12-16 06:04:14.755830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.991 [2024-12-16 06:04:14.755869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.991 [2024-12-16 06:04:14.756373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.991 [2024-12-16 06:04:14.756557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.991 [2024-12-16 06:04:14.756565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.991 [2024-12-16 06:04:14.756572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.991 [2024-12-16 06:04:14.759482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.991 [2024-12-16 06:04:14.768173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.991 [2024-12-16 06:04:14.768534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.991 [2024-12-16 06:04:14.768579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.991 [2024-12-16 06:04:14.768603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.991 [2024-12-16 06:04:14.769114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.991 [2024-12-16 06:04:14.769282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.991 [2024-12-16 06:04:14.769289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.991 [2024-12-16 06:04:14.769295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.991 [2024-12-16 06:04:14.771901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.991 [2024-12-16 06:04:14.781084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.991 [2024-12-16 06:04:14.781545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.991 [2024-12-16 06:04:14.781588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.991 [2024-12-16 06:04:14.781611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.991 [2024-12-16 06:04:14.782101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.991 [2024-12-16 06:04:14.782272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.991 [2024-12-16 06:04:14.782280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.991 [2024-12-16 06:04:14.782286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.992 [2024-12-16 06:04:14.784909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.992 [2024-12-16 06:04:14.793965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.992 [2024-12-16 06:04:14.794336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.992 [2024-12-16 06:04:14.794351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.992 [2024-12-16 06:04:14.794359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.992 [2024-12-16 06:04:14.794525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.992 [2024-12-16 06:04:14.794691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.992 [2024-12-16 06:04:14.794700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.992 [2024-12-16 06:04:14.794706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.992 [2024-12-16 06:04:14.797376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.992 [2024-12-16 06:04:14.806711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.992 [2024-12-16 06:04:14.807071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.992 [2024-12-16 06:04:14.807087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.992 [2024-12-16 06:04:14.807095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.992 [2024-12-16 06:04:14.807261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.992 [2024-12-16 06:04:14.807428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.992 [2024-12-16 06:04:14.807435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.992 [2024-12-16 06:04:14.807441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.992 [2024-12-16 06:04:14.810037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.992 [2024-12-16 06:04:14.819527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.992 [2024-12-16 06:04:14.819927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.992 [2024-12-16 06:04:14.819944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.992 [2024-12-16 06:04:14.819952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.992 [2024-12-16 06:04:14.820127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.992 [2024-12-16 06:04:14.820285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.992 [2024-12-16 06:04:14.820294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.992 [2024-12-16 06:04:14.820300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.992 [2024-12-16 06:04:14.822950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.992 [2024-12-16 06:04:14.832540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.992 [2024-12-16 06:04:14.832902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.992 [2024-12-16 06:04:14.832919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:40.992 [2024-12-16 06:04:14.832930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:40.992 [2024-12-16 06:04:14.833101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:40.992 [2024-12-16 06:04:14.833273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.992 [2024-12-16 06:04:14.833281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.992 [2024-12-16 06:04:14.833287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.992 [2024-12-16 06:04:14.836032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.252 [2024-12-16 06:04:14.845582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.252 [2024-12-16 06:04:14.845888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.252 [2024-12-16 06:04:14.845905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.252 [2024-12-16 06:04:14.845913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.252 [2024-12-16 06:04:14.846084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.252 [2024-12-16 06:04:14.846266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.252 [2024-12-16 06:04:14.846275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.252 [2024-12-16 06:04:14.846282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.252 [2024-12-16 06:04:14.849042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.252 [2024-12-16 06:04:14.858502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.252 [2024-12-16 06:04:14.858912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.252 [2024-12-16 06:04:14.858928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.252 [2024-12-16 06:04:14.858936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.252 [2024-12-16 06:04:14.859108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.252 [2024-12-16 06:04:14.859279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.252 [2024-12-16 06:04:14.859288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.252 [2024-12-16 06:04:14.859294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.252 [2024-12-16 06:04:14.862004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.252 [2024-12-16 06:04:14.871514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.252 [2024-12-16 06:04:14.871872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.252 [2024-12-16 06:04:14.871889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.252 [2024-12-16 06:04:14.871896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.252 [2024-12-16 06:04:14.872067] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.252 [2024-12-16 06:04:14.872239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.252 [2024-12-16 06:04:14.872249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.252 [2024-12-16 06:04:14.872256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.252 [2024-12-16 06:04:14.874955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.252 [2024-12-16 06:04:14.884485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.252 [2024-12-16 06:04:14.884794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.252 [2024-12-16 06:04:14.884810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.252 [2024-12-16 06:04:14.884817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.252 [2024-12-16 06:04:14.884994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.252 [2024-12-16 06:04:14.885166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.252 [2024-12-16 06:04:14.885174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.252 [2024-12-16 06:04:14.885181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.252 [2024-12-16 06:04:14.887860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.252 [2024-12-16 06:04:14.897446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.252 [2024-12-16 06:04:14.897729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.252 [2024-12-16 06:04:14.897745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.252 [2024-12-16 06:04:14.897753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.252 [2024-12-16 06:04:14.897928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.252 [2024-12-16 06:04:14.898101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.252 [2024-12-16 06:04:14.898119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.252 [2024-12-16 06:04:14.898126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.252 [2024-12-16 06:04:14.900818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.252 [2024-12-16 06:04:14.910425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.252 [2024-12-16 06:04:14.910781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.252 [2024-12-16 06:04:14.910797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.252 [2024-12-16 06:04:14.910805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.252 [2024-12-16 06:04:14.910981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.252 [2024-12-16 06:04:14.911153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.252 [2024-12-16 06:04:14.911160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.252 [2024-12-16 06:04:14.911167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.252 [2024-12-16 06:04:14.913896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.252 [2024-12-16 06:04:14.923386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.252 [2024-12-16 06:04:14.923760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.253 [2024-12-16 06:04:14.923805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.253 [2024-12-16 06:04:14.923828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.253 [2024-12-16 06:04:14.924420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.253 [2024-12-16 06:04:14.924913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.253 [2024-12-16 06:04:14.924926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.253 [2024-12-16 06:04:14.924936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.253 [2024-12-16 06:04:14.929372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.253 [2024-12-16 06:04:14.937264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.253 [2024-12-16 06:04:14.937612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.253 [2024-12-16 06:04:14.937629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.253 [2024-12-16 06:04:14.937638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.253 [2024-12-16 06:04:14.937819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.253 [2024-12-16 06:04:14.938007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.253 [2024-12-16 06:04:14.938016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.253 [2024-12-16 06:04:14.938023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.253 [2024-12-16 06:04:14.940941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.253 [2024-12-16 06:04:14.950164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.253 [2024-12-16 06:04:14.950467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.253 [2024-12-16 06:04:14.950483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.253 [2024-12-16 06:04:14.950491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.253 [2024-12-16 06:04:14.950657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.253 [2024-12-16 06:04:14.950824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.253 [2024-12-16 06:04:14.950832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.253 [2024-12-16 06:04:14.950838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.253 [2024-12-16 06:04:14.953504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.253 [2024-12-16 06:04:14.963049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.253 [2024-12-16 06:04:14.963381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.253 [2024-12-16 06:04:14.963398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.253 [2024-12-16 06:04:14.963405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.253 [2024-12-16 06:04:14.963575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.253 [2024-12-16 06:04:14.963741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.253 [2024-12-16 06:04:14.963749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.253 [2024-12-16 06:04:14.963755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.253 [2024-12-16 06:04:14.966377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.253 [2024-12-16 06:04:14.975902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.253 [2024-12-16 06:04:14.976186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.253 [2024-12-16 06:04:14.976203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.253 [2024-12-16 06:04:14.976210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.253 [2024-12-16 06:04:14.976381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.253 [2024-12-16 06:04:14.976553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.253 [2024-12-16 06:04:14.976561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.253 [2024-12-16 06:04:14.976567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.253 [2024-12-16 06:04:14.979194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.253 [2024-12-16 06:04:14.988693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.253 [2024-12-16 06:04:14.988987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.253 [2024-12-16 06:04:14.989032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.253 [2024-12-16 06:04:14.989056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.253 [2024-12-16 06:04:14.989633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.253 [2024-12-16 06:04:14.990079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.253 [2024-12-16 06:04:14.990087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.253 [2024-12-16 06:04:14.990093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.253 [2024-12-16 06:04:14.992745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.253 [2024-12-16 06:04:15.001532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.253 [2024-12-16 06:04:15.001825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.253 [2024-12-16 06:04:15.001841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.253 [2024-12-16 06:04:15.001853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.253 [2024-12-16 06:04:15.002021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.253 [2024-12-16 06:04:15.002188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.253 [2024-12-16 06:04:15.002195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.253 [2024-12-16 06:04:15.002205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.253 [2024-12-16 06:04:15.004869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.253 [2024-12-16 06:04:15.014395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.253 [2024-12-16 06:04:15.014763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.253 [2024-12-16 06:04:15.014781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.253 [2024-12-16 06:04:15.014789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.253 [2024-12-16 06:04:15.014960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.253 [2024-12-16 06:04:15.015127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.253 [2024-12-16 06:04:15.015135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.253 [2024-12-16 06:04:15.015141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.253 [2024-12-16 06:04:15.017797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.253 [2024-12-16 06:04:15.027282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.253 [2024-12-16 06:04:15.027641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.253 [2024-12-16 06:04:15.027684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.253 [2024-12-16 06:04:15.027708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.253 [2024-12-16 06:04:15.028299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.253 [2024-12-16 06:04:15.028885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.253 [2024-12-16 06:04:15.028893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.253 [2024-12-16 06:04:15.028900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.253 [2024-12-16 06:04:15.031497] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.253 [2024-12-16 06:04:15.040184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.253 [2024-12-16 06:04:15.040471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.253 [2024-12-16 06:04:15.040488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.253 [2024-12-16 06:04:15.040495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.253 [2024-12-16 06:04:15.040661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.253 [2024-12-16 06:04:15.040827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.253 [2024-12-16 06:04:15.040835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.253 [2024-12-16 06:04:15.040841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.253 [2024-12-16 06:04:15.043440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.253 [2024-12-16 06:04:15.053097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.253 [2024-12-16 06:04:15.053379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.253 [2024-12-16 06:04:15.053397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.253 [2024-12-16 06:04:15.053405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.253 [2024-12-16 06:04:15.053570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.253 [2024-12-16 06:04:15.053737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.254 [2024-12-16 06:04:15.053744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.254 [2024-12-16 06:04:15.053750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.254 [2024-12-16 06:04:15.056348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.254 [2024-12-16 06:04:15.065949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.254 [2024-12-16 06:04:15.066329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.254 [2024-12-16 06:04:15.066345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.254 [2024-12-16 06:04:15.066352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.254 [2024-12-16 06:04:15.066518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.254 [2024-12-16 06:04:15.066685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.254 [2024-12-16 06:04:15.066692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.254 [2024-12-16 06:04:15.066698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.254 [2024-12-16 06:04:15.069364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.254 [2024-12-16 06:04:15.078917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.254 [2024-12-16 06:04:15.079253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.254 [2024-12-16 06:04:15.079269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.254 [2024-12-16 06:04:15.079276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.254 [2024-12-16 06:04:15.079447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.254 [2024-12-16 06:04:15.079619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.254 [2024-12-16 06:04:15.079626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.254 [2024-12-16 06:04:15.079633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.254 [2024-12-16 06:04:15.082396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.254 [2024-12-16 06:04:15.091930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.254 [2024-12-16 06:04:15.092319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.254 [2024-12-16 06:04:15.092334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.254 [2024-12-16 06:04:15.092341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.254 [2024-12-16 06:04:15.092507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.254 [2024-12-16 06:04:15.092679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.254 [2024-12-16 06:04:15.092686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.254 [2024-12-16 06:04:15.092692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.254 [2024-12-16 06:04:15.095356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.254 [2024-12-16 06:04:15.105102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.254 [2024-12-16 06:04:15.105494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.254 [2024-12-16 06:04:15.105509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.254 [2024-12-16 06:04:15.105517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.254 [2024-12-16 06:04:15.105689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.254 [2024-12-16 06:04:15.105865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.254 [2024-12-16 06:04:15.105873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.254 [2024-12-16 06:04:15.105880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.512 [2024-12-16 06:04:15.108617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.512 [2024-12-16 06:04:15.118048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.512 [2024-12-16 06:04:15.118316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.512 [2024-12-16 06:04:15.118331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.512 [2024-12-16 06:04:15.118338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.512 [2024-12-16 06:04:15.118504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.512 [2024-12-16 06:04:15.118672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.512 [2024-12-16 06:04:15.118679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.512 [2024-12-16 06:04:15.118685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.512 [2024-12-16 06:04:15.121349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.512 [2024-12-16 06:04:15.130943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.512 [2024-12-16 06:04:15.131231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.512 [2024-12-16 06:04:15.131246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.512 [2024-12-16 06:04:15.131253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.512 [2024-12-16 06:04:15.131427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.512 [2024-12-16 06:04:15.131594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.512 [2024-12-16 06:04:15.131601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.512 [2024-12-16 06:04:15.131609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.512 [2024-12-16 06:04:15.134275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.512 [2024-12-16 06:04:15.143876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.513 [2024-12-16 06:04:15.144205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.513 [2024-12-16 06:04:15.144221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.513 [2024-12-16 06:04:15.144228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.513 [2024-12-16 06:04:15.144395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.513 [2024-12-16 06:04:15.144562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.513 [2024-12-16 06:04:15.144568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.513 [2024-12-16 06:04:15.144575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.513 [2024-12-16 06:04:15.147296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.513 [2024-12-16 06:04:15.156880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.513 [2024-12-16 06:04:15.157207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.513 [2024-12-16 06:04:15.157222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.513 [2024-12-16 06:04:15.157229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.513 [2024-12-16 06:04:15.157396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.513 [2024-12-16 06:04:15.157562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.513 [2024-12-16 06:04:15.157568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.513 [2024-12-16 06:04:15.157575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.513 [2024-12-16 06:04:15.160244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.513 [2024-12-16 06:04:15.169893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.513 [2024-12-16 06:04:15.170232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.513 [2024-12-16 06:04:15.170248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.513 [2024-12-16 06:04:15.170255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.513 [2024-12-16 06:04:15.170422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.513 [2024-12-16 06:04:15.170589] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.513 [2024-12-16 06:04:15.170595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.513 [2024-12-16 06:04:15.170602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.513 [2024-12-16 06:04:15.173267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.513 [2024-12-16 06:04:15.182751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.513 [2024-12-16 06:04:15.183108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.513 [2024-12-16 06:04:15.183124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.513 [2024-12-16 06:04:15.183134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.513 [2024-12-16 06:04:15.183301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.513 [2024-12-16 06:04:15.183466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.513 [2024-12-16 06:04:15.183473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.513 [2024-12-16 06:04:15.183479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.513 [2024-12-16 06:04:15.186142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.513 [2024-12-16 06:04:15.195721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.513 [2024-12-16 06:04:15.196097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.513 [2024-12-16 06:04:15.196113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.513 [2024-12-16 06:04:15.196120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.513 [2024-12-16 06:04:15.196287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.513 [2024-12-16 06:04:15.196453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.513 [2024-12-16 06:04:15.196460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.513 [2024-12-16 06:04:15.196466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.513 [2024-12-16 06:04:15.199182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.513 [2024-12-16 06:04:15.208608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.513 [2024-12-16 06:04:15.208894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.513 [2024-12-16 06:04:15.208909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.513 [2024-12-16 06:04:15.208917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.513 [2024-12-16 06:04:15.209083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.513 [2024-12-16 06:04:15.209249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.513 [2024-12-16 06:04:15.209256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.513 [2024-12-16 06:04:15.209263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.513 [2024-12-16 06:04:15.211928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.513 [2024-12-16 06:04:15.221509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.513 [2024-12-16 06:04:15.221931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.513 [2024-12-16 06:04:15.221947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.513 [2024-12-16 06:04:15.221954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.513 [2024-12-16 06:04:15.222121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.513 [2024-12-16 06:04:15.222287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.513 [2024-12-16 06:04:15.222297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.513 [2024-12-16 06:04:15.222303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.513 [2024-12-16 06:04:15.224970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.513 [2024-12-16 06:04:15.234571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.513 [2024-12-16 06:04:15.234933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.513 [2024-12-16 06:04:15.234950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.513 [2024-12-16 06:04:15.234957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.513 [2024-12-16 06:04:15.235129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.513 [2024-12-16 06:04:15.235300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.513 [2024-12-16 06:04:15.235308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.513 [2024-12-16 06:04:15.235315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.513 [2024-12-16 06:04:15.238056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.513 [2024-12-16 06:04:15.247515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.513 [2024-12-16 06:04:15.247896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.513 [2024-12-16 06:04:15.247940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.513 [2024-12-16 06:04:15.247963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.513 [2024-12-16 06:04:15.248518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.513 [2024-12-16 06:04:15.248794] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.513 [2024-12-16 06:04:15.248806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.513 [2024-12-16 06:04:15.248816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.513 [2024-12-16 06:04:15.253253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.513 [2024-12-16 06:04:15.261291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.513 [2024-12-16 06:04:15.261741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.513 [2024-12-16 06:04:15.261758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.513 [2024-12-16 06:04:15.261766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.513 [2024-12-16 06:04:15.261953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.513 [2024-12-16 06:04:15.262149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.513 [2024-12-16 06:04:15.262157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.513 [2024-12-16 06:04:15.262164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.513 [2024-12-16 06:04:15.265078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.513 [2024-12-16 06:04:15.274117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.513 [2024-12-16 06:04:15.274550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.513 [2024-12-16 06:04:15.274593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.513 [2024-12-16 06:04:15.274616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.513 [2024-12-16 06:04:15.275210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.514 [2024-12-16 06:04:15.275629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.514 [2024-12-16 06:04:15.275636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.514 [2024-12-16 06:04:15.275642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.514 [2024-12-16 06:04:15.278263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.514 [2024-12-16 06:04:15.287075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.514 [2024-12-16 06:04:15.287416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.514 [2024-12-16 06:04:15.287432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.514 [2024-12-16 06:04:15.287439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.514 [2024-12-16 06:04:15.287605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.514 [2024-12-16 06:04:15.287772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.514 [2024-12-16 06:04:15.287780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.514 [2024-12-16 06:04:15.287786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.514 [2024-12-16 06:04:15.290387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.514 [2024-12-16 06:04:15.299878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.514 [2024-12-16 06:04:15.300332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.514 [2024-12-16 06:04:15.300375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.514 [2024-12-16 06:04:15.300397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.514 [2024-12-16 06:04:15.300877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.514 [2024-12-16 06:04:15.301044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.514 [2024-12-16 06:04:15.301052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.514 [2024-12-16 06:04:15.301058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.514 [2024-12-16 06:04:15.303637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.514 [2024-12-16 06:04:15.312724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.514 [2024-12-16 06:04:15.313137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.514 [2024-12-16 06:04:15.313152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.514 [2024-12-16 06:04:15.313159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.514 [2024-12-16 06:04:15.313329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.514 [2024-12-16 06:04:15.313496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.514 [2024-12-16 06:04:15.313503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.514 [2024-12-16 06:04:15.313509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.514 [2024-12-16 06:04:15.316170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.514 [2024-12-16 06:04:15.325439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.514 [2024-12-16 06:04:15.325924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.514 [2024-12-16 06:04:15.325940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.514 [2024-12-16 06:04:15.325948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.514 [2024-12-16 06:04:15.326115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.514 [2024-12-16 06:04:15.326282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.514 [2024-12-16 06:04:15.326290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.514 [2024-12-16 06:04:15.326298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.514 [2024-12-16 06:04:15.328960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.514 [2024-12-16 06:04:15.338231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.514 [2024-12-16 06:04:15.338674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.514 [2024-12-16 06:04:15.338691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.514 [2024-12-16 06:04:15.338698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.514 [2024-12-16 06:04:15.338871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.514 [2024-12-16 06:04:15.339038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.514 [2024-12-16 06:04:15.339046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.514 [2024-12-16 06:04:15.339052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.514 [2024-12-16 06:04:15.341641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.514 [2024-12-16 06:04:15.351034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.514 [2024-12-16 06:04:15.351472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.514 [2024-12-16 06:04:15.351517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.514 [2024-12-16 06:04:15.351540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.514 [2024-12-16 06:04:15.351950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.514 [2024-12-16 06:04:15.352117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.514 [2024-12-16 06:04:15.352125] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.514 [2024-12-16 06:04:15.352134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.514 [2024-12-16 06:04:15.354722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.514 [2024-12-16 06:04:15.363938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.514 [2024-12-16 06:04:15.364371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.514 [2024-12-16 06:04:15.364387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.514 [2024-12-16 06:04:15.364395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.514 [2024-12-16 06:04:15.364565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.514 [2024-12-16 06:04:15.364737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.514 [2024-12-16 06:04:15.364744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.514 [2024-12-16 06:04:15.364751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.514 [2024-12-16 06:04:15.367492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.773 [2024-12-16 06:04:15.376954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.773 [2024-12-16 06:04:15.377297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.773 [2024-12-16 06:04:15.377313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.773 [2024-12-16 06:04:15.377320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.773 [2024-12-16 06:04:15.377486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.773 [2024-12-16 06:04:15.377654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.773 [2024-12-16 06:04:15.377661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.773 [2024-12-16 06:04:15.377667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.773 [2024-12-16 06:04:15.380272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.773 [2024-12-16 06:04:15.389661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.773 [2024-12-16 06:04:15.390059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.773 [2024-12-16 06:04:15.390074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.773 [2024-12-16 06:04:15.390081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.773 [2024-12-16 06:04:15.390238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.773 [2024-12-16 06:04:15.390396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.773 [2024-12-16 06:04:15.390403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.773 [2024-12-16 06:04:15.390409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.773 [2024-12-16 06:04:15.392995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.773 [2024-12-16 06:04:15.402500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.773 [2024-12-16 06:04:15.402935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.773 [2024-12-16 06:04:15.402987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.773 [2024-12-16 06:04:15.403011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.773 [2024-12-16 06:04:15.403591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.773 [2024-12-16 06:04:15.404014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.773 [2024-12-16 06:04:15.404022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.773 [2024-12-16 06:04:15.404028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.773 [2024-12-16 06:04:15.406619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.773 [2024-12-16 06:04:15.415246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.773 [2024-12-16 06:04:15.415704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.773 [2024-12-16 06:04:15.415749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.773 [2024-12-16 06:04:15.415771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.773 [2024-12-16 06:04:15.416170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.773 [2024-12-16 06:04:15.416337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.773 [2024-12-16 06:04:15.416344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.773 [2024-12-16 06:04:15.416351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.773 [2024-12-16 06:04:15.418941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.773 [2024-12-16 06:04:15.427997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.773 [2024-12-16 06:04:15.428349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.773 [2024-12-16 06:04:15.428392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.773 [2024-12-16 06:04:15.428415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.773 [2024-12-16 06:04:15.428895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.773 [2024-12-16 06:04:15.429062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.773 [2024-12-16 06:04:15.429070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.773 [2024-12-16 06:04:15.429076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.773 [2024-12-16 06:04:15.431665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.773 [2024-12-16 06:04:15.440738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.773 [2024-12-16 06:04:15.441178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.773 [2024-12-16 06:04:15.441194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.773 [2024-12-16 06:04:15.441201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.773 [2024-12-16 06:04:15.441367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.773 [2024-12-16 06:04:15.441537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.773 [2024-12-16 06:04:15.441544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.773 [2024-12-16 06:04:15.441550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.774 [2024-12-16 06:04:15.444169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.774 [2024-12-16 06:04:15.453502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.774 [2024-12-16 06:04:15.453897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.774 [2024-12-16 06:04:15.453913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.774 [2024-12-16 06:04:15.453919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.774 [2024-12-16 06:04:15.454077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.774 [2024-12-16 06:04:15.454235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.774 [2024-12-16 06:04:15.454242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.774 [2024-12-16 06:04:15.454248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.774 [2024-12-16 06:04:15.456858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.774 [2024-12-16 06:04:15.466339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.774 [2024-12-16 06:04:15.466692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.774 [2024-12-16 06:04:15.466708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.774 [2024-12-16 06:04:15.466715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.774 [2024-12-16 06:04:15.466888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.774 [2024-12-16 06:04:15.467055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.774 [2024-12-16 06:04:15.467062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.774 [2024-12-16 06:04:15.467069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.774 [2024-12-16 06:04:15.469658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.774 [2024-12-16 06:04:15.479168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.774 [2024-12-16 06:04:15.479568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.774 [2024-12-16 06:04:15.479583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.774 [2024-12-16 06:04:15.479590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.774 [2024-12-16 06:04:15.479748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.774 [2024-12-16 06:04:15.479929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.774 [2024-12-16 06:04:15.479937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.774 [2024-12-16 06:04:15.479943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.774 [2024-12-16 06:04:15.482539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.774 [2024-12-16 06:04:15.491993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.774 [2024-12-16 06:04:15.492310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.774 [2024-12-16 06:04:15.492325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.774 [2024-12-16 06:04:15.492333] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.774 [2024-12-16 06:04:15.492499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.774 [2024-12-16 06:04:15.492666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.774 [2024-12-16 06:04:15.492673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.774 [2024-12-16 06:04:15.492679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.774 [2024-12-16 06:04:15.495397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.774 [2024-12-16 06:04:15.504744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.774 [2024-12-16 06:04:15.505192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.774 [2024-12-16 06:04:15.505237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.774 [2024-12-16 06:04:15.505260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.774 [2024-12-16 06:04:15.505839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.774 [2024-12-16 06:04:15.506286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.774 [2024-12-16 06:04:15.506293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.774 [2024-12-16 06:04:15.506299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.774 [2024-12-16 06:04:15.508927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.774 [2024-12-16 06:04:15.517670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.774 [2024-12-16 06:04:15.518018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.774 [2024-12-16 06:04:15.518034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.774 [2024-12-16 06:04:15.518041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.774 [2024-12-16 06:04:15.518208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.774 [2024-12-16 06:04:15.518375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.774 [2024-12-16 06:04:15.518382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.774 [2024-12-16 06:04:15.518388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.774 [2024-12-16 06:04:15.521007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.774 [2024-12-16 06:04:15.530397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.774 [2024-12-16 06:04:15.530791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.774 [2024-12-16 06:04:15.530807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.774 [2024-12-16 06:04:15.530816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.774 [2024-12-16 06:04:15.531001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.774 [2024-12-16 06:04:15.531169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.774 [2024-12-16 06:04:15.531176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.774 [2024-12-16 06:04:15.531182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.774 [2024-12-16 06:04:15.533779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.774 [2024-12-16 06:04:15.543139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.774 [2024-12-16 06:04:15.543554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.774 [2024-12-16 06:04:15.543569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.774 [2024-12-16 06:04:15.543576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.774 [2024-12-16 06:04:15.543742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.774 [2024-12-16 06:04:15.543920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.774 [2024-12-16 06:04:15.543929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.774 [2024-12-16 06:04:15.543935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.774 [2024-12-16 06:04:15.546524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.774 [2024-12-16 06:04:15.555913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.774 [2024-12-16 06:04:15.556303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.774 [2024-12-16 06:04:15.556318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.774 [2024-12-16 06:04:15.556325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.774 [2024-12-16 06:04:15.556482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.774 [2024-12-16 06:04:15.556640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.774 [2024-12-16 06:04:15.556647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.774 [2024-12-16 06:04:15.556653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.774 [2024-12-16 06:04:15.559267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.774 [2024-12-16 06:04:15.568778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.774 [2024-12-16 06:04:15.569170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.774 [2024-12-16 06:04:15.569186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.774 [2024-12-16 06:04:15.569194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.774 [2024-12-16 06:04:15.569365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.774 [2024-12-16 06:04:15.569537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.774 [2024-12-16 06:04:15.569548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.774 [2024-12-16 06:04:15.569554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.774 [2024-12-16 06:04:15.572192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.774 [2024-12-16 06:04:15.581560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.774 [2024-12-16 06:04:15.581992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.774 [2024-12-16 06:04:15.582037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.775 [2024-12-16 06:04:15.582060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.775 [2024-12-16 06:04:15.582637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.775 [2024-12-16 06:04:15.583229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.775 [2024-12-16 06:04:15.583257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.775 [2024-12-16 06:04:15.583282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.775 [2024-12-16 06:04:15.586047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.775 [2024-12-16 06:04:15.594694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.775 [2024-12-16 06:04:15.595064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.775 [2024-12-16 06:04:15.595080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.775 [2024-12-16 06:04:15.595088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.775 [2024-12-16 06:04:15.595258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.775 [2024-12-16 06:04:15.595430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.775 [2024-12-16 06:04:15.595439] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.775 [2024-12-16 06:04:15.595447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.775 [2024-12-16 06:04:15.598166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.775 [2024-12-16 06:04:15.607673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.775 [2024-12-16 06:04:15.607987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.775 [2024-12-16 06:04:15.608003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.775 [2024-12-16 06:04:15.608011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.775 [2024-12-16 06:04:15.608190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.775 [2024-12-16 06:04:15.608356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.775 [2024-12-16 06:04:15.608363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.775 [2024-12-16 06:04:15.608369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.775 [2024-12-16 06:04:15.611035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.775 [2024-12-16 06:04:15.620685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.775 [2024-12-16 06:04:15.621098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.775 [2024-12-16 06:04:15.621113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:41.775 [2024-12-16 06:04:15.621121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:41.775 [2024-12-16 06:04:15.621287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:41.775 [2024-12-16 06:04:15.621454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.775 [2024-12-16 06:04:15.621461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.775 [2024-12-16 06:04:15.621467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.775 [2024-12-16 06:04:15.624168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.035 [2024-12-16 06:04:15.633599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.035 [2024-12-16 06:04:15.634006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.035 [2024-12-16 06:04:15.634023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.035 [2024-12-16 06:04:15.634030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.035 [2024-12-16 06:04:15.634202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.035 [2024-12-16 06:04:15.634381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.035 [2024-12-16 06:04:15.634388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.035 [2024-12-16 06:04:15.634394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.035 [2024-12-16 06:04:15.637044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.035 7453.00 IOPS, 29.11 MiB/s [2024-12-16T05:04:15.891Z] [2024-12-16 06:04:15.646405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.035 [2024-12-16 06:04:15.646748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.035 [2024-12-16 06:04:15.646765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.035 [2024-12-16 06:04:15.646772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.035 [2024-12-16 06:04:15.646945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.035 [2024-12-16 06:04:15.647112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.035 [2024-12-16 06:04:15.647120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.035 [2024-12-16 06:04:15.647126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.035 [2024-12-16 06:04:15.649716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.035 [2024-12-16 06:04:15.659162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.035 [2024-12-16 06:04:15.659571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.035 [2024-12-16 06:04:15.659616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.035 [2024-12-16 06:04:15.659639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.035 [2024-12-16 06:04:15.660238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.035 [2024-12-16 06:04:15.660634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.035 [2024-12-16 06:04:15.660641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.035 [2024-12-16 06:04:15.660648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.035 [2024-12-16 06:04:15.663199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.035 [2024-12-16 06:04:15.671868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.035 [2024-12-16 06:04:15.672239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.035 [2024-12-16 06:04:15.672254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.035 [2024-12-16 06:04:15.672261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.035 [2024-12-16 06:04:15.672419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.035 [2024-12-16 06:04:15.672577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.035 [2024-12-16 06:04:15.672584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.035 [2024-12-16 06:04:15.672590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.035 [2024-12-16 06:04:15.675183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.035 [2024-12-16 06:04:15.684718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.035 [2024-12-16 06:04:15.685143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.035 [2024-12-16 06:04:15.685188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.035 [2024-12-16 06:04:15.685211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.035 [2024-12-16 06:04:15.685637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.035 [2024-12-16 06:04:15.685804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.035 [2024-12-16 06:04:15.685811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.035 [2024-12-16 06:04:15.685817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.035 [2024-12-16 06:04:15.688414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.035 [2024-12-16 06:04:15.697502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.035 [2024-12-16 06:04:15.697921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.035 [2024-12-16 06:04:15.697937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.035 [2024-12-16 06:04:15.697945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.035 [2024-12-16 06:04:15.698112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.035 [2024-12-16 06:04:15.698278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.035 [2024-12-16 06:04:15.698286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.035 [2024-12-16 06:04:15.698295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.035 [2024-12-16 06:04:15.700948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.035 [2024-12-16 06:04:15.710517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.035 [2024-12-16 06:04:15.710960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.035 [2024-12-16 06:04:15.710977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.035 [2024-12-16 06:04:15.710985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.035 [2024-12-16 06:04:15.711152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.035 [2024-12-16 06:04:15.711320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.035 [2024-12-16 06:04:15.711327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.035 [2024-12-16 06:04:15.711334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.035 [2024-12-16 06:04:15.713959] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.035 [2024-12-16 06:04:15.723350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.035 [2024-12-16 06:04:15.723695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.035 [2024-12-16 06:04:15.723712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.035 [2024-12-16 06:04:15.723719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.035 [2024-12-16 06:04:15.723891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.035 [2024-12-16 06:04:15.724059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.035 [2024-12-16 06:04:15.724067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.035 [2024-12-16 06:04:15.724073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.035 [2024-12-16 06:04:15.726661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.035 [2024-12-16 06:04:15.736180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.035 [2024-12-16 06:04:15.736607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.035 [2024-12-16 06:04:15.736623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.035 [2024-12-16 06:04:15.736630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.035 [2024-12-16 06:04:15.736796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.035 [2024-12-16 06:04:15.736969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.035 [2024-12-16 06:04:15.736977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.035 [2024-12-16 06:04:15.736983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.035 [2024-12-16 06:04:15.739574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.035 [2024-12-16 06:04:15.748926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.035 [2024-12-16 06:04:15.749319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.035 [2024-12-16 06:04:15.749340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.036 [2024-12-16 06:04:15.749347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.036 [2024-12-16 06:04:15.749504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.036 [2024-12-16 06:04:15.749662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.036 [2024-12-16 06:04:15.749669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.036 [2024-12-16 06:04:15.749675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.036 [2024-12-16 06:04:15.752275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.036 [2024-12-16 06:04:15.761628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.036 [2024-12-16 06:04:15.762024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.036 [2024-12-16 06:04:15.762040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.036 [2024-12-16 06:04:15.762047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.036 [2024-12-16 06:04:15.762205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.036 [2024-12-16 06:04:15.762363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.036 [2024-12-16 06:04:15.762370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.036 [2024-12-16 06:04:15.762376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.036 [2024-12-16 06:04:15.764991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.036 [2024-12-16 06:04:15.774375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.036 [2024-12-16 06:04:15.774793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.036 [2024-12-16 06:04:15.774810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.036 [2024-12-16 06:04:15.774817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.036 [2024-12-16 06:04:15.774989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.036 [2024-12-16 06:04:15.775157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.036 [2024-12-16 06:04:15.775164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.036 [2024-12-16 06:04:15.775170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.036 [2024-12-16 06:04:15.777823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.036 [2024-12-16 06:04:15.787108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.036 [2024-12-16 06:04:15.787509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.036 [2024-12-16 06:04:15.787524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.036 [2024-12-16 06:04:15.787531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.036 [2024-12-16 06:04:15.787688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.036 [2024-12-16 06:04:15.787856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.036 [2024-12-16 06:04:15.787864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.036 [2024-12-16 06:04:15.787870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.036 [2024-12-16 06:04:15.790525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.036 [2024-12-16 06:04:15.800001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.036 [2024-12-16 06:04:15.800421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.036 [2024-12-16 06:04:15.800437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.036 [2024-12-16 06:04:15.800444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.036 [2024-12-16 06:04:15.800611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.036 [2024-12-16 06:04:15.800777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.036 [2024-12-16 06:04:15.800784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.036 [2024-12-16 06:04:15.800791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.036 [2024-12-16 06:04:15.803391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.036 [2024-12-16 06:04:15.812746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.036 [2024-12-16 06:04:15.813158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.036 [2024-12-16 06:04:15.813174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.036 [2024-12-16 06:04:15.813182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.036 [2024-12-16 06:04:15.813348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.036 [2024-12-16 06:04:15.813515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.036 [2024-12-16 06:04:15.813522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.036 [2024-12-16 06:04:15.813528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.036 [2024-12-16 06:04:15.816142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.036 [2024-12-16 06:04:15.825531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.036 [2024-12-16 06:04:15.825928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.036 [2024-12-16 06:04:15.825944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.036 [2024-12-16 06:04:15.825952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.036 [2024-12-16 06:04:15.826118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.036 [2024-12-16 06:04:15.826285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.036 [2024-12-16 06:04:15.826292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.036 [2024-12-16 06:04:15.826298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.036 [2024-12-16 06:04:15.828925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.036 [2024-12-16 06:04:15.838283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.036 [2024-12-16 06:04:15.838680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.036 [2024-12-16 06:04:15.838725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.036 [2024-12-16 06:04:15.838748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.036 [2024-12-16 06:04:15.839201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.036 [2024-12-16 06:04:15.839374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.036 [2024-12-16 06:04:15.839382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.036 [2024-12-16 06:04:15.839388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.036 [2024-12-16 06:04:15.842171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.036 [2024-12-16 06:04:15.851345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.036 [2024-12-16 06:04:15.851751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.036 [2024-12-16 06:04:15.851768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.036 [2024-12-16 06:04:15.851776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.036 [2024-12-16 06:04:15.851954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.036 [2024-12-16 06:04:15.852126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.036 [2024-12-16 06:04:15.852135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.036 [2024-12-16 06:04:15.852141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.036 [2024-12-16 06:04:15.854870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.036 [2024-12-16 06:04:15.864356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.036 [2024-12-16 06:04:15.864758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.036 [2024-12-16 06:04:15.864774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.036 [2024-12-16 06:04:15.864781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.036 [2024-12-16 06:04:15.864971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.036 [2024-12-16 06:04:15.865143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.036 [2024-12-16 06:04:15.865150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.036 [2024-12-16 06:04:15.865157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.036 [2024-12-16 06:04:15.867848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.036 [2024-12-16 06:04:15.877099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.036 [2024-12-16 06:04:15.877505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.036 [2024-12-16 06:04:15.877549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.036 [2024-12-16 06:04:15.877579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.036 [2024-12-16 06:04:15.878049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.036 [2024-12-16 06:04:15.878221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.036 [2024-12-16 06:04:15.878228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.037 [2024-12-16 06:04:15.878234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.037 [2024-12-16 06:04:15.880833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.297 [2024-12-16 06:04:15.890151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.297 [2024-12-16 06:04:15.890547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.297 [2024-12-16 06:04:15.890591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.297 [2024-12-16 06:04:15.890614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.297 [2024-12-16 06:04:15.891063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.297 [2024-12-16 06:04:15.891244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.297 [2024-12-16 06:04:15.891251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.297 [2024-12-16 06:04:15.891257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.297 [2024-12-16 06:04:15.893929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.297 [2024-12-16 06:04:15.902914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.297 [2024-12-16 06:04:15.903333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.297 [2024-12-16 06:04:15.903349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.297 [2024-12-16 06:04:15.903357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.297 [2024-12-16 06:04:15.903528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.297 [2024-12-16 06:04:15.903700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.297 [2024-12-16 06:04:15.903708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.297 [2024-12-16 06:04:15.903714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.297 [2024-12-16 06:04:15.906405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.297 [2024-12-16 06:04:15.915897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.297 [2024-12-16 06:04:15.916274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.297 [2024-12-16 06:04:15.916290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.297 [2024-12-16 06:04:15.916298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.297 [2024-12-16 06:04:15.916468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.297 [2024-12-16 06:04:15.916639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.297 [2024-12-16 06:04:15.916650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.297 [2024-12-16 06:04:15.916657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.297 [2024-12-16 06:04:15.919350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.297 [2024-12-16 06:04:15.928823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.297 [2024-12-16 06:04:15.929178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.297 [2024-12-16 06:04:15.929222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.297 [2024-12-16 06:04:15.929245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.297 [2024-12-16 06:04:15.929822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.297 [2024-12-16 06:04:15.930292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.297 [2024-12-16 06:04:15.930300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.297 [2024-12-16 06:04:15.930306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.297 [2024-12-16 06:04:15.932902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.297 [2024-12-16 06:04:15.941639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.297 [2024-12-16 06:04:15.942091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.297 [2024-12-16 06:04:15.942138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.297 [2024-12-16 06:04:15.942162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.297 [2024-12-16 06:04:15.942742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.297 [2024-12-16 06:04:15.943271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.297 [2024-12-16 06:04:15.943279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.297 [2024-12-16 06:04:15.943286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.297 [2024-12-16 06:04:15.945887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.297 [2024-12-16 06:04:15.954445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.297 [2024-12-16 06:04:15.954842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.297 [2024-12-16 06:04:15.954862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.297 [2024-12-16 06:04:15.954869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.297 [2024-12-16 06:04:15.955052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.297 [2024-12-16 06:04:15.955218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.297 [2024-12-16 06:04:15.955225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.297 [2024-12-16 06:04:15.955231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.297 [2024-12-16 06:04:15.957868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.297 [2024-12-16 06:04:15.967317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.297 [2024-12-16 06:04:15.967723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.297 [2024-12-16 06:04:15.967766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.297 [2024-12-16 06:04:15.967789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.297 [2024-12-16 06:04:15.968303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.297 [2024-12-16 06:04:15.968470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.297 [2024-12-16 06:04:15.968477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.297 [2024-12-16 06:04:15.968483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.297 [2024-12-16 06:04:15.971162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.297 [2024-12-16 06:04:15.980216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.297 [2024-12-16 06:04:15.980634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.297 [2024-12-16 06:04:15.980649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.297 [2024-12-16 06:04:15.980657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.297 [2024-12-16 06:04:15.980823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.297 [2024-12-16 06:04:15.980994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.297 [2024-12-16 06:04:15.981002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.297 [2024-12-16 06:04:15.981008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.297 [2024-12-16 06:04:15.983601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.297 [2024-12-16 06:04:15.993097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.297 [2024-12-16 06:04:15.993516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.297 [2024-12-16 06:04:15.993532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.297 [2024-12-16 06:04:15.993539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.297 [2024-12-16 06:04:15.993705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.297 [2024-12-16 06:04:15.993878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.297 [2024-12-16 06:04:15.993886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.297 [2024-12-16 06:04:15.993892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.297 [2024-12-16 06:04:15.996463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.298 [2024-12-16 06:04:16.005900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.298 [2024-12-16 06:04:16.006325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.298 [2024-12-16 06:04:16.006340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.298 [2024-12-16 06:04:16.006348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.298 [2024-12-16 06:04:16.006517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.298 [2024-12-16 06:04:16.006684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.298 [2024-12-16 06:04:16.006691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.298 [2024-12-16 06:04:16.006697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.298 [2024-12-16 06:04:16.009242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.298 [2024-12-16 06:04:16.018696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.298 [2024-12-16 06:04:16.019140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.298 [2024-12-16 06:04:16.019156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.298 [2024-12-16 06:04:16.019164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.298 [2024-12-16 06:04:16.019330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.298 [2024-12-16 06:04:16.019497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.298 [2024-12-16 06:04:16.019504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.298 [2024-12-16 06:04:16.019510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.298 [2024-12-16 06:04:16.022125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.298 [2024-12-16 06:04:16.031512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.298 [2024-12-16 06:04:16.031928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.298 [2024-12-16 06:04:16.031944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.298 [2024-12-16 06:04:16.031951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.298 [2024-12-16 06:04:16.032118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.298 [2024-12-16 06:04:16.032284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.298 [2024-12-16 06:04:16.032291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.298 [2024-12-16 06:04:16.032297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.298 [2024-12-16 06:04:16.034924] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.298 [2024-12-16 06:04:16.044281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.298 [2024-12-16 06:04:16.044650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.298 [2024-12-16 06:04:16.044666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.298 [2024-12-16 06:04:16.044672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.298 [2024-12-16 06:04:16.044830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.298 [2024-12-16 06:04:16.045014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.298 [2024-12-16 06:04:16.045022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.298 [2024-12-16 06:04:16.045032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.298 [2024-12-16 06:04:16.047624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.298 [2024-12-16 06:04:16.056985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.298 [2024-12-16 06:04:16.057376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.298 [2024-12-16 06:04:16.057391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.298 [2024-12-16 06:04:16.057398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.298 [2024-12-16 06:04:16.057564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.298 [2024-12-16 06:04:16.057730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.298 [2024-12-16 06:04:16.057737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.298 [2024-12-16 06:04:16.057744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.298 [2024-12-16 06:04:16.060346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.298 [2024-12-16 06:04:16.069833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.298 [2024-12-16 06:04:16.070251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.298 [2024-12-16 06:04:16.070267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.298 [2024-12-16 06:04:16.070275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.298 [2024-12-16 06:04:16.070432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.298 [2024-12-16 06:04:16.070590] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.298 [2024-12-16 06:04:16.070597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.298 [2024-12-16 06:04:16.070602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.298 [2024-12-16 06:04:16.073215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.298 [2024-12-16 06:04:16.082544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.298 [2024-12-16 06:04:16.082958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.298 [2024-12-16 06:04:16.082974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.298 [2024-12-16 06:04:16.082981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.298 [2024-12-16 06:04:16.083148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.298 [2024-12-16 06:04:16.083314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.298 [2024-12-16 06:04:16.083322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.298 [2024-12-16 06:04:16.083328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.298 [2024-12-16 06:04:16.086008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.298 [2024-12-16 06:04:16.095253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.298 [2024-12-16 06:04:16.095676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.298 [2024-12-16 06:04:16.095692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.298 [2024-12-16 06:04:16.095699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.298 [2024-12-16 06:04:16.095872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.298 [2024-12-16 06:04:16.096061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.298 [2024-12-16 06:04:16.096070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.298 [2024-12-16 06:04:16.096078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.298 [2024-12-16 06:04:16.098888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.298 [2024-12-16 06:04:16.108303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.298 [2024-12-16 06:04:16.108729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.298 [2024-12-16 06:04:16.108773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.298 [2024-12-16 06:04:16.108796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.298 [2024-12-16 06:04:16.109389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.298 [2024-12-16 06:04:16.109609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.298 [2024-12-16 06:04:16.109616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.298 [2024-12-16 06:04:16.109623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.298 [2024-12-16 06:04:16.112326] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.298 [2024-12-16 06:04:16.121163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.298 [2024-12-16 06:04:16.121561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.298 [2024-12-16 06:04:16.121577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.298 [2024-12-16 06:04:16.121584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.298 [2024-12-16 06:04:16.121750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.298 [2024-12-16 06:04:16.121940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.298 [2024-12-16 06:04:16.121948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.298 [2024-12-16 06:04:16.121954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.298 [2024-12-16 06:04:16.124653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.298 [2024-12-16 06:04:16.133940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.298 [2024-12-16 06:04:16.134252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.298 [2024-12-16 06:04:16.134267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.298 [2024-12-16 06:04:16.134274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.299 [2024-12-16 06:04:16.134431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.299 [2024-12-16 06:04:16.134598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.299 [2024-12-16 06:04:16.134605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.299 [2024-12-16 06:04:16.134611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.299 [2024-12-16 06:04:16.137225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.299 [2024-12-16 06:04:16.146710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.299 [2024-12-16 06:04:16.147173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.299 [2024-12-16 06:04:16.147189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.299 [2024-12-16 06:04:16.147197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.299 [2024-12-16 06:04:16.147369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.299 [2024-12-16 06:04:16.147540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.299 [2024-12-16 06:04:16.147547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.299 [2024-12-16 06:04:16.147554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.299 [2024-12-16 06:04:16.150297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.559 [2024-12-16 06:04:16.159593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.559 [2024-12-16 06:04:16.160013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.559 [2024-12-16 06:04:16.160029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.559 [2024-12-16 06:04:16.160036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.559 [2024-12-16 06:04:16.160194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.559 [2024-12-16 06:04:16.160351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.559 [2024-12-16 06:04:16.160358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.559 [2024-12-16 06:04:16.160365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.559 [2024-12-16 06:04:16.163017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.559 [2024-12-16 06:04:16.172411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.559 [2024-12-16 06:04:16.172813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.559 [2024-12-16 06:04:16.172869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.559 [2024-12-16 06:04:16.172894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.559 [2024-12-16 06:04:16.173472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.559 [2024-12-16 06:04:16.173874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.559 [2024-12-16 06:04:16.173882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.559 [2024-12-16 06:04:16.173889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.559 [2024-12-16 06:04:16.176482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.559 [2024-12-16 06:04:16.185111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.559 [2024-12-16 06:04:16.185535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.559 [2024-12-16 06:04:16.185579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.559 [2024-12-16 06:04:16.185602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.559 [2024-12-16 06:04:16.186194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.559 [2024-12-16 06:04:16.186787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.559 [2024-12-16 06:04:16.186794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.559 [2024-12-16 06:04:16.186800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.559 [2024-12-16 06:04:16.189391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.559 [2024-12-16 06:04:16.197953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.559 [2024-12-16 06:04:16.198376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.559 [2024-12-16 06:04:16.198423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.559 [2024-12-16 06:04:16.198447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.559 [2024-12-16 06:04:16.198934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.559 [2024-12-16 06:04:16.199106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.559 [2024-12-16 06:04:16.199113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.559 [2024-12-16 06:04:16.199120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.559 [2024-12-16 06:04:16.201729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.559 [2024-12-16 06:04:16.210746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.559 [2024-12-16 06:04:16.211179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.559 [2024-12-16 06:04:16.211196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.559 [2024-12-16 06:04:16.211203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.559 [2024-12-16 06:04:16.211369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.559 [2024-12-16 06:04:16.211536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.559 [2024-12-16 06:04:16.211544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.559 [2024-12-16 06:04:16.211550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.559 [2024-12-16 06:04:16.214153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.559 [2024-12-16 06:04:16.223661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.559 [2024-12-16 06:04:16.224001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.559 [2024-12-16 06:04:16.224017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.559 [2024-12-16 06:04:16.224027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.559 [2024-12-16 06:04:16.224194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.559 [2024-12-16 06:04:16.224360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.559 [2024-12-16 06:04:16.224368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.559 [2024-12-16 06:04:16.224374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.559 [2024-12-16 06:04:16.226993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.559 [2024-12-16 06:04:16.236496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.559 [2024-12-16 06:04:16.236933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.559 [2024-12-16 06:04:16.236979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.559 [2024-12-16 06:04:16.237002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.559 [2024-12-16 06:04:16.237580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.559 [2024-12-16 06:04:16.237808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.559 [2024-12-16 06:04:16.237816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.559 [2024-12-16 06:04:16.237822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.559 [2024-12-16 06:04:16.240463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.559 [2024-12-16 06:04:16.249592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.559 [2024-12-16 06:04:16.250019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.559 [2024-12-16 06:04:16.250036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.559 [2024-12-16 06:04:16.250043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.559 [2024-12-16 06:04:16.250215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.559 [2024-12-16 06:04:16.250386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.559 [2024-12-16 06:04:16.250394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.559 [2024-12-16 06:04:16.250400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.559 [2024-12-16 06:04:16.253152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.559 [2024-12-16 06:04:16.262527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.559 [2024-12-16 06:04:16.262963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.559 [2024-12-16 06:04:16.262980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.559 [2024-12-16 06:04:16.262988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.559 [2024-12-16 06:04:16.263156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.559 [2024-12-16 06:04:16.263314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.559 [2024-12-16 06:04:16.263324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.559 [2024-12-16 06:04:16.263330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.559 [2024-12-16 06:04:16.265989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.559 [2024-12-16 06:04:16.275470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.559 [2024-12-16 06:04:16.275900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.560 [2024-12-16 06:04:16.275917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.560 [2024-12-16 06:04:16.275924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.560 [2024-12-16 06:04:16.276096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.560 [2024-12-16 06:04:16.276253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.560 [2024-12-16 06:04:16.276261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.560 [2024-12-16 06:04:16.276266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.560 [2024-12-16 06:04:16.278914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.560 [2024-12-16 06:04:16.288352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.560 [2024-12-16 06:04:16.288774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.560 [2024-12-16 06:04:16.288817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.560 [2024-12-16 06:04:16.288839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.560 [2024-12-16 06:04:16.289430] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.560 [2024-12-16 06:04:16.290030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.560 [2024-12-16 06:04:16.290042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.560 [2024-12-16 06:04:16.290052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.560 [2024-12-16 06:04:16.294488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.560 [2024-12-16 06:04:16.302091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.560 [2024-12-16 06:04:16.302507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.560 [2024-12-16 06:04:16.302552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.560 [2024-12-16 06:04:16.302575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.560 [2024-12-16 06:04:16.303104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.560 [2024-12-16 06:04:16.303287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.560 [2024-12-16 06:04:16.303295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.560 [2024-12-16 06:04:16.303301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.560 [2024-12-16 06:04:16.306216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.560 [2024-12-16 06:04:16.314913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.560 [2024-12-16 06:04:16.315240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.560 [2024-12-16 06:04:16.315256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.560 [2024-12-16 06:04:16.315263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.560 [2024-12-16 06:04:16.315429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.560 [2024-12-16 06:04:16.315596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.560 [2024-12-16 06:04:16.315604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.560 [2024-12-16 06:04:16.315610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.560 [2024-12-16 06:04:16.318281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.560 [2024-12-16 06:04:16.327779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.560 [2024-12-16 06:04:16.328150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.560 [2024-12-16 06:04:16.328167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.560 [2024-12-16 06:04:16.328174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.560 [2024-12-16 06:04:16.328340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.560 [2024-12-16 06:04:16.328507] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.560 [2024-12-16 06:04:16.328515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.560 [2024-12-16 06:04:16.328521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.560 [2024-12-16 06:04:16.331119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.560 [2024-12-16 06:04:16.340619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.560 [2024-12-16 06:04:16.341019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.560 [2024-12-16 06:04:16.341035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.560 [2024-12-16 06:04:16.341042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.560 [2024-12-16 06:04:16.341209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.560 [2024-12-16 06:04:16.341376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.560 [2024-12-16 06:04:16.341383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.560 [2024-12-16 06:04:16.341390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.560 [2024-12-16 06:04:16.344016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.560 [2024-12-16 06:04:16.353454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.560 [2024-12-16 06:04:16.353923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.560 [2024-12-16 06:04:16.353941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.560 [2024-12-16 06:04:16.353948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.560 [2024-12-16 06:04:16.354133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.560 [2024-12-16 06:04:16.354300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.560 [2024-12-16 06:04:16.354308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.560 [2024-12-16 06:04:16.354314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.560 [2024-12-16 06:04:16.357057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.560 [2024-12-16 06:04:16.366471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.560 [2024-12-16 06:04:16.366841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.560 [2024-12-16 06:04:16.366902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.560 [2024-12-16 06:04:16.366925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.560 [2024-12-16 06:04:16.367506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.560 [2024-12-16 06:04:16.368097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.560 [2024-12-16 06:04:16.368122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.560 [2024-12-16 06:04:16.368144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.560 [2024-12-16 06:04:16.370917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.560 [2024-12-16 06:04:16.379402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.560 [2024-12-16 06:04:16.379750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.560 [2024-12-16 06:04:16.379766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.560 [2024-12-16 06:04:16.379774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.560 [2024-12-16 06:04:16.379951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.560 [2024-12-16 06:04:16.380133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.560 [2024-12-16 06:04:16.380140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.560 [2024-12-16 06:04:16.380146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.560 [2024-12-16 06:04:16.382804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.560 [2024-12-16 06:04:16.392244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.560 [2024-12-16 06:04:16.392670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.560 [2024-12-16 06:04:16.392686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.560 [2024-12-16 06:04:16.392693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.560 [2024-12-16 06:04:16.392865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.560 [2024-12-16 06:04:16.393033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.560 [2024-12-16 06:04:16.393040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.560 [2024-12-16 06:04:16.393052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.560 [2024-12-16 06:04:16.395642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.560 [2024-12-16 06:04:16.405049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.560 [2024-12-16 06:04:16.405516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.560 [2024-12-16 06:04:16.405559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.560 [2024-12-16 06:04:16.405583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.560 [2024-12-16 06:04:16.406150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.560 [2024-12-16 06:04:16.406323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.561 [2024-12-16 06:04:16.406332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.561 [2024-12-16 06:04:16.406338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.561 [2024-12-16 06:04:16.409001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.821 [2024-12-16 06:04:16.417974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.821 [2024-12-16 06:04:16.418382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.821 [2024-12-16 06:04:16.418397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.821 [2024-12-16 06:04:16.418405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.821 [2024-12-16 06:04:16.418576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.821 [2024-12-16 06:04:16.418747] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.821 [2024-12-16 06:04:16.418754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.821 [2024-12-16 06:04:16.418761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.821 [2024-12-16 06:04:16.421462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.821 [2024-12-16 06:04:16.430888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.821 [2024-12-16 06:04:16.431190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.821 [2024-12-16 06:04:16.431206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.821 [2024-12-16 06:04:16.431213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.821 [2024-12-16 06:04:16.431379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.821 [2024-12-16 06:04:16.431546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.821 [2024-12-16 06:04:16.431553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.821 [2024-12-16 06:04:16.431559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.821 [2024-12-16 06:04:16.434243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.821 [2024-12-16 06:04:16.443733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.821 [2024-12-16 06:04:16.444121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.821 [2024-12-16 06:04:16.444137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.821 [2024-12-16 06:04:16.444144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.821 [2024-12-16 06:04:16.444310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.821 [2024-12-16 06:04:16.444477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.821 [2024-12-16 06:04:16.444484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.821 [2024-12-16 06:04:16.444491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.821 [2024-12-16 06:04:16.447180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.821 [2024-12-16 06:04:16.456666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.821 [2024-12-16 06:04:16.457034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.821 [2024-12-16 06:04:16.457051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.821 [2024-12-16 06:04:16.457059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.821 [2024-12-16 06:04:16.457225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.821 [2024-12-16 06:04:16.457392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.821 [2024-12-16 06:04:16.457399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.821 [2024-12-16 06:04:16.457405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.821 [2024-12-16 06:04:16.460080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.821 [2024-12-16 06:04:16.469474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.821 [2024-12-16 06:04:16.469908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.821 [2024-12-16 06:04:16.469953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.821 [2024-12-16 06:04:16.469976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.821 [2024-12-16 06:04:16.470480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.821 [2024-12-16 06:04:16.470637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.821 [2024-12-16 06:04:16.470644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.821 [2024-12-16 06:04:16.470650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.821 [2024-12-16 06:04:16.473346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.821 [2024-12-16 06:04:16.482307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.821 [2024-12-16 06:04:16.482733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.821 [2024-12-16 06:04:16.482777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.821 [2024-12-16 06:04:16.482800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.821 [2024-12-16 06:04:16.483329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.821 [2024-12-16 06:04:16.483499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.821 [2024-12-16 06:04:16.483507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.821 [2024-12-16 06:04:16.483513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.821 [2024-12-16 06:04:16.486184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.821 [2024-12-16 06:04:16.495247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.821 [2024-12-16 06:04:16.495598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.821 [2024-12-16 06:04:16.495614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.821 [2024-12-16 06:04:16.495622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.821 [2024-12-16 06:04:16.495788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.821 [2024-12-16 06:04:16.495961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.821 [2024-12-16 06:04:16.495969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.821 [2024-12-16 06:04:16.495975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.821 [2024-12-16 06:04:16.498618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.821 [2024-12-16 06:04:16.508124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.821 [2024-12-16 06:04:16.508488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.821 [2024-12-16 06:04:16.508504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.821 [2024-12-16 06:04:16.508511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.821 [2024-12-16 06:04:16.508677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.821 [2024-12-16 06:04:16.508844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.821 [2024-12-16 06:04:16.508859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.821 [2024-12-16 06:04:16.508865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.821 [2024-12-16 06:04:16.511505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.821 [2024-12-16 06:04:16.521028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.821 [2024-12-16 06:04:16.521309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.821 [2024-12-16 06:04:16.521325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.821 [2024-12-16 06:04:16.521332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.821 [2024-12-16 06:04:16.521499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.821 [2024-12-16 06:04:16.521666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.821 [2024-12-16 06:04:16.521674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.821 [2024-12-16 06:04:16.521680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.821 [2024-12-16 06:04:16.524327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.822 [2024-12-16 06:04:16.533966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.822 [2024-12-16 06:04:16.534385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.822 [2024-12-16 06:04:16.534401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.822 [2024-12-16 06:04:16.534408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.822 [2024-12-16 06:04:16.534574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.822 [2024-12-16 06:04:16.534741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.822 [2024-12-16 06:04:16.534749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.822 [2024-12-16 06:04:16.534755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.822 [2024-12-16 06:04:16.537433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.822 [2024-12-16 06:04:16.546895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.822 [2024-12-16 06:04:16.547183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.822 [2024-12-16 06:04:16.547199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.822 [2024-12-16 06:04:16.547206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.822 [2024-12-16 06:04:16.547372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.822 [2024-12-16 06:04:16.547539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.822 [2024-12-16 06:04:16.547546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.822 [2024-12-16 06:04:16.547552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.822 [2024-12-16 06:04:16.550223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.822 [2024-12-16 06:04:16.559872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.822 [2024-12-16 06:04:16.560249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.822 [2024-12-16 06:04:16.560264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.822 [2024-12-16 06:04:16.560272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.822 [2024-12-16 06:04:16.560442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.822 [2024-12-16 06:04:16.560614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.822 [2024-12-16 06:04:16.560622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.822 [2024-12-16 06:04:16.560628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.822 [2024-12-16 06:04:16.563325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.822 [2024-12-16 06:04:16.572714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.822 [2024-12-16 06:04:16.573158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.822 [2024-12-16 06:04:16.573207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.822 [2024-12-16 06:04:16.573237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.822 [2024-12-16 06:04:16.573815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.822 [2024-12-16 06:04:16.574030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.822 [2024-12-16 06:04:16.574038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.822 [2024-12-16 06:04:16.574045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.822 [2024-12-16 06:04:16.576680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.822 [2024-12-16 06:04:16.585573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.822 [2024-12-16 06:04:16.585947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.822 [2024-12-16 06:04:16.585964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.822 [2024-12-16 06:04:16.585971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.822 [2024-12-16 06:04:16.586138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.822 [2024-12-16 06:04:16.586304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.822 [2024-12-16 06:04:16.586312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.822 [2024-12-16 06:04:16.586318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.822 [2024-12-16 06:04:16.588917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.822 [2024-12-16 06:04:16.598514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.822 [2024-12-16 06:04:16.598866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.822 [2024-12-16 06:04:16.598882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.822 [2024-12-16 06:04:16.598889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.822 [2024-12-16 06:04:16.599055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.822 [2024-12-16 06:04:16.599222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.822 [2024-12-16 06:04:16.599229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.822 [2024-12-16 06:04:16.599235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.822 [2024-12-16 06:04:16.601905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.822 [2024-12-16 06:04:16.611491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.822 [2024-12-16 06:04:16.611841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.822 [2024-12-16 06:04:16.611862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.822 [2024-12-16 06:04:16.611870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.822 [2024-12-16 06:04:16.612041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.822 [2024-12-16 06:04:16.612214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.822 [2024-12-16 06:04:16.612225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.822 [2024-12-16 06:04:16.612232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.822 [2024-12-16 06:04:16.614971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.822 [2024-12-16 06:04:16.624601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.822 [2024-12-16 06:04:16.624974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.822 [2024-12-16 06:04:16.624991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.822 [2024-12-16 06:04:16.624999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.822 [2024-12-16 06:04:16.625181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.822 [2024-12-16 06:04:16.625364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.822 [2024-12-16 06:04:16.625372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.822 [2024-12-16 06:04:16.625378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.822 [2024-12-16 06:04:16.628297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.822 [2024-12-16 06:04:16.637755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.822 [2024-12-16 06:04:16.638195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.822 [2024-12-16 06:04:16.638212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.822 [2024-12-16 06:04:16.638219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.822 [2024-12-16 06:04:16.638391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.822 [2024-12-16 06:04:16.638562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.822 [2024-12-16 06:04:16.638569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.822 [2024-12-16 06:04:16.638576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.822 [2024-12-16 06:04:16.641317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:42.822 5962.40 IOPS, 23.29 MiB/s [2024-12-16T05:04:16.678Z] [2024-12-16 06:04:16.650789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.822 [2024-12-16 06:04:16.651230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.822 [2024-12-16 06:04:16.651269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.822 [2024-12-16 06:04:16.651295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.822 [2024-12-16 06:04:16.651843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.822 [2024-12-16 06:04:16.652020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.822 [2024-12-16 06:04:16.652028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.822 [2024-12-16 06:04:16.652034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.822 [2024-12-16 06:04:16.654738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:42.822 [2024-12-16 06:04:16.663513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:42.822 [2024-12-16 06:04:16.663897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.822 [2024-12-16 06:04:16.663944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:42.822 [2024-12-16 06:04:16.663966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:42.823 [2024-12-16 06:04:16.664495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:42.823 [2024-12-16 06:04:16.664770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:42.823 [2024-12-16 06:04:16.664783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:42.823 [2024-12-16 06:04:16.664792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.823 [2024-12-16 06:04:16.669229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.083 [2024-12-16 06:04:16.677067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.083 [2024-12-16 06:04:16.677510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.083 [2024-12-16 06:04:16.677527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.083 [2024-12-16 06:04:16.677535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.083 [2024-12-16 06:04:16.677717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.083 [2024-12-16 06:04:16.677905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.083 [2024-12-16 06:04:16.677915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.083 [2024-12-16 06:04:16.677922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.083 [2024-12-16 06:04:16.680831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.083 [2024-12-16 06:04:16.689843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.083 [2024-12-16 06:04:16.690290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.083 [2024-12-16 06:04:16.690306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.083 [2024-12-16 06:04:16.690313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.083 [2024-12-16 06:04:16.690479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.083 [2024-12-16 06:04:16.690646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.083 [2024-12-16 06:04:16.690653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.083 [2024-12-16 06:04:16.690659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.083 [2024-12-16 06:04:16.693264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.083 [2024-12-16 06:04:16.702621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.083 [2024-12-16 06:04:16.703069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.083 [2024-12-16 06:04:16.703116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.083 [2024-12-16 06:04:16.703146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.083 [2024-12-16 06:04:16.703725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.083 [2024-12-16 06:04:16.704147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.083 [2024-12-16 06:04:16.704155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.083 [2024-12-16 06:04:16.704161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.083 [2024-12-16 06:04:16.706861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.083 [2024-12-16 06:04:16.715405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.083 [2024-12-16 06:04:16.715858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.083 [2024-12-16 06:04:16.715876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.083 [2024-12-16 06:04:16.715884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.083 [2024-12-16 06:04:16.716050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.083 [2024-12-16 06:04:16.716220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.083 [2024-12-16 06:04:16.716227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.083 [2024-12-16 06:04:16.716233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.083 [2024-12-16 06:04:16.718806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.083 [2024-12-16 06:04:16.728166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.083 [2024-12-16 06:04:16.728596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.083 [2024-12-16 06:04:16.728643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.083 [2024-12-16 06:04:16.728666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.083 [2024-12-16 06:04:16.729184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.083 [2024-12-16 06:04:16.729352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.083 [2024-12-16 06:04:16.729359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.083 [2024-12-16 06:04:16.729365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.083 [2024-12-16 06:04:16.731957] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.083 [2024-12-16 06:04:16.741006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.083 [2024-12-16 06:04:16.741467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.083 [2024-12-16 06:04:16.741513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.083 [2024-12-16 06:04:16.741535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.083 [2024-12-16 06:04:16.742128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.083 [2024-12-16 06:04:16.742371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.083 [2024-12-16 06:04:16.742378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.083 [2024-12-16 06:04:16.742387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.083 [2024-12-16 06:04:16.744990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.083 [2024-12-16 06:04:16.753737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.083 [2024-12-16 06:04:16.754199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.083 [2024-12-16 06:04:16.754244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.083 [2024-12-16 06:04:16.754267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.083 [2024-12-16 06:04:16.754844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.083 [2024-12-16 06:04:16.755364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.083 [2024-12-16 06:04:16.755376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.083 [2024-12-16 06:04:16.755386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.083 [2024-12-16 06:04:16.759817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.083 [2024-12-16 06:04:16.767662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.083 [2024-12-16 06:04:16.768110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.083 [2024-12-16 06:04:16.768127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.084 [2024-12-16 06:04:16.768135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.084 [2024-12-16 06:04:16.768317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.084 [2024-12-16 06:04:16.768499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.084 [2024-12-16 06:04:16.768507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.084 [2024-12-16 06:04:16.768513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.084 [2024-12-16 06:04:16.771427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.084 [2024-12-16 06:04:16.780599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.084 [2024-12-16 06:04:16.781027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.084 [2024-12-16 06:04:16.781043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.084 [2024-12-16 06:04:16.781050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.084 [2024-12-16 06:04:16.781208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.084 [2024-12-16 06:04:16.781366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.084 [2024-12-16 06:04:16.781373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.084 [2024-12-16 06:04:16.781379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.084 [2024-12-16 06:04:16.783991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.084 [2024-12-16 06:04:16.793432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.084 [2024-12-16 06:04:16.793888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.084 [2024-12-16 06:04:16.793932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.084 [2024-12-16 06:04:16.793956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.084 [2024-12-16 06:04:16.794342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.084 [2024-12-16 06:04:16.794509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.084 [2024-12-16 06:04:16.794516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.084 [2024-12-16 06:04:16.794522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.084 [2024-12-16 06:04:16.797135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.084 [2024-12-16 06:04:16.806246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.084 [2024-12-16 06:04:16.806642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.084 [2024-12-16 06:04:16.806657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.084 [2024-12-16 06:04:16.806664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.084 [2024-12-16 06:04:16.806821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.084 [2024-12-16 06:04:16.807007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.084 [2024-12-16 06:04:16.807015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.084 [2024-12-16 06:04:16.807021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.084 [2024-12-16 06:04:16.809690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.084 [2024-12-16 06:04:16.819031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.084 [2024-12-16 06:04:16.819420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.084 [2024-12-16 06:04:16.819435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.084 [2024-12-16 06:04:16.819442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.084 [2024-12-16 06:04:16.819599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.084 [2024-12-16 06:04:16.819756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.084 [2024-12-16 06:04:16.819763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.084 [2024-12-16 06:04:16.819769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.084 [2024-12-16 06:04:16.822380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.084 [2024-12-16 06:04:16.831776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.084 [2024-12-16 06:04:16.832193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.084 [2024-12-16 06:04:16.832209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.084 [2024-12-16 06:04:16.832216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.084 [2024-12-16 06:04:16.832385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.084 [2024-12-16 06:04:16.832552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.084 [2024-12-16 06:04:16.832559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.084 [2024-12-16 06:04:16.832565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.084 [2024-12-16 06:04:16.835176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.084 [2024-12-16 06:04:16.844532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.084 [2024-12-16 06:04:16.844977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.084 [2024-12-16 06:04:16.844992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.084 [2024-12-16 06:04:16.845000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.084 [2024-12-16 06:04:16.845170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.084 [2024-12-16 06:04:16.845328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.084 [2024-12-16 06:04:16.845335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.084 [2024-12-16 06:04:16.845341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.084 [2024-12-16 06:04:16.847960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.084 [2024-12-16 06:04:16.857347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.084 [2024-12-16 06:04:16.857787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.084 [2024-12-16 06:04:16.857840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.084 [2024-12-16 06:04:16.857876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.084 [2024-12-16 06:04:16.858368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.084 [2024-12-16 06:04:16.858535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.084 [2024-12-16 06:04:16.858542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.084 [2024-12-16 06:04:16.858548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.084 [2024-12-16 06:04:16.861142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.084 [2024-12-16 06:04:16.870172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.084 [2024-12-16 06:04:16.870569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.084 [2024-12-16 06:04:16.870613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.084 [2024-12-16 06:04:16.870637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.084 [2024-12-16 06:04:16.871229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.084 [2024-12-16 06:04:16.871623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.084 [2024-12-16 06:04:16.871632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.084 [2024-12-16 06:04:16.871639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.084 [2024-12-16 06:04:16.874394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.084 [2024-12-16 06:04:16.883182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.084 [2024-12-16 06:04:16.883551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.084 [2024-12-16 06:04:16.883567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.084 [2024-12-16 06:04:16.883575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.084 [2024-12-16 06:04:16.883746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.084 [2024-12-16 06:04:16.883923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.084 [2024-12-16 06:04:16.883933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.084 [2024-12-16 06:04:16.883939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.084 [2024-12-16 06:04:16.886609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.084 [2024-12-16 06:04:16.896017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.084 [2024-12-16 06:04:16.896381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.084 [2024-12-16 06:04:16.896397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.084 [2024-12-16 06:04:16.896404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.084 [2024-12-16 06:04:16.896570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.084 [2024-12-16 06:04:16.896737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.084 [2024-12-16 06:04:16.896744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.084 [2024-12-16 06:04:16.896750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.085 [2024-12-16 06:04:16.899346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.085 [2024-12-16 06:04:16.908788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.085 [2024-12-16 06:04:16.909160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.085 [2024-12-16 06:04:16.909176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.085 [2024-12-16 06:04:16.909183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.085 [2024-12-16 06:04:16.909350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.085 [2024-12-16 06:04:16.909517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.085 [2024-12-16 06:04:16.909524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.085 [2024-12-16 06:04:16.909530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.085 [2024-12-16 06:04:16.912143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.085 [2024-12-16 06:04:16.921529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.085 [2024-12-16 06:04:16.921880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.085 [2024-12-16 06:04:16.921899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.085 [2024-12-16 06:04:16.921906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.085 [2024-12-16 06:04:16.922063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.085 [2024-12-16 06:04:16.922221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.085 [2024-12-16 06:04:16.922228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.085 [2024-12-16 06:04:16.922234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.085 [2024-12-16 06:04:16.924891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.085 [2024-12-16 06:04:16.934432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.085 [2024-12-16 06:04:16.934863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.085 [2024-12-16 06:04:16.934879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.085 [2024-12-16 06:04:16.934887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.085 [2024-12-16 06:04:16.935058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.085 [2024-12-16 06:04:16.935229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.085 [2024-12-16 06:04:16.935237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.085 [2024-12-16 06:04:16.935243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.345 [2024-12-16 06:04:16.937989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.345 [2024-12-16 06:04:16.947295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.345 [2024-12-16 06:04:16.947722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.345 [2024-12-16 06:04:16.947771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.345 [2024-12-16 06:04:16.947794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.345 [2024-12-16 06:04:16.948385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.345 [2024-12-16 06:04:16.948583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.345 [2024-12-16 06:04:16.948590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.345 [2024-12-16 06:04:16.948596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.345 [2024-12-16 06:04:16.952772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.345 [2024-12-16 06:04:16.961035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.345 [2024-12-16 06:04:16.961488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.345 [2024-12-16 06:04:16.961504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.345 [2024-12-16 06:04:16.961512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.345 [2024-12-16 06:04:16.961694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.345 [2024-12-16 06:04:16.961886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.345 [2024-12-16 06:04:16.961895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.345 [2024-12-16 06:04:16.961902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.345 [2024-12-16 06:04:16.964809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.345 [2024-12-16 06:04:16.973839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.345 [2024-12-16 06:04:16.974258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.345 [2024-12-16 06:04:16.974274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.345 [2024-12-16 06:04:16.974282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.345 [2024-12-16 06:04:16.974448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.345 [2024-12-16 06:04:16.974614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.345 [2024-12-16 06:04:16.974622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.345 [2024-12-16 06:04:16.974628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.345 [2024-12-16 06:04:16.977234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.345 [2024-12-16 06:04:16.986637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.345 [2024-12-16 06:04:16.987076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.345 [2024-12-16 06:04:16.987093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.345 [2024-12-16 06:04:16.987101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.345 [2024-12-16 06:04:16.987267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.345 [2024-12-16 06:04:16.987434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.345 [2024-12-16 06:04:16.987442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.345 [2024-12-16 06:04:16.987448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.345 [2024-12-16 06:04:16.990071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.345 [2024-12-16 06:04:16.999419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.345 [2024-12-16 06:04:16.999872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.345 [2024-12-16 06:04:16.999916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.345 [2024-12-16 06:04:16.999939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.345 [2024-12-16 06:04:17.000516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.345 [2024-12-16 06:04:17.000719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.345 [2024-12-16 06:04:17.000726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.345 [2024-12-16 06:04:17.000733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.345 [2024-12-16 06:04:17.003339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.345 [2024-12-16 06:04:17.012227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.345 [2024-12-16 06:04:17.012597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.346 [2024-12-16 06:04:17.012614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.346 [2024-12-16 06:04:17.012621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.346 [2024-12-16 06:04:17.012787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.346 [2024-12-16 06:04:17.012961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.346 [2024-12-16 06:04:17.012969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.346 [2024-12-16 06:04:17.012976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.346 [2024-12-16 06:04:17.015569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.346 [2024-12-16 06:04:17.025152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.346 [2024-12-16 06:04:17.025509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.346 [2024-12-16 06:04:17.025524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.346 [2024-12-16 06:04:17.025532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.346 [2024-12-16 06:04:17.025698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.346 [2024-12-16 06:04:17.025871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.346 [2024-12-16 06:04:17.025879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.346 [2024-12-16 06:04:17.025885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.346 [2024-12-16 06:04:17.028477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.346 [2024-12-16 06:04:17.037971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.346 [2024-12-16 06:04:17.038326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.346 [2024-12-16 06:04:17.038341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.346 [2024-12-16 06:04:17.038348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.346 [2024-12-16 06:04:17.038514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.346 [2024-12-16 06:04:17.038681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.346 [2024-12-16 06:04:17.038689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.346 [2024-12-16 06:04:17.038695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.346 [2024-12-16 06:04:17.041304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.346 [2024-12-16 06:04:17.050812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.346 [2024-12-16 06:04:17.051254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.346 [2024-12-16 06:04:17.051308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.346 [2024-12-16 06:04:17.051344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.346 [2024-12-16 06:04:17.051931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.346 [2024-12-16 06:04:17.052098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.346 [2024-12-16 06:04:17.052106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.346 [2024-12-16 06:04:17.052112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.346 [2024-12-16 06:04:17.054701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.346 [2024-12-16 06:04:17.063575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.346 [2024-12-16 06:04:17.063938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.346 [2024-12-16 06:04:17.063954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.346 [2024-12-16 06:04:17.063962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.346 [2024-12-16 06:04:17.064128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.346 [2024-12-16 06:04:17.064295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.346 [2024-12-16 06:04:17.064303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.346 [2024-12-16 06:04:17.064309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.346 [2024-12-16 06:04:17.066903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.346 [2024-12-16 06:04:17.076397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.346 [2024-12-16 06:04:17.076763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.346 [2024-12-16 06:04:17.076779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.346 [2024-12-16 06:04:17.076786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.346 [2024-12-16 06:04:17.076957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.346 [2024-12-16 06:04:17.077124] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.346 [2024-12-16 06:04:17.077132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.346 [2024-12-16 06:04:17.077138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.346 [2024-12-16 06:04:17.079800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.346 [2024-12-16 06:04:17.089435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.346 [2024-12-16 06:04:17.089874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.346 [2024-12-16 06:04:17.089890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.346 [2024-12-16 06:04:17.089897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.346 [2024-12-16 06:04:17.090070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.346 [2024-12-16 06:04:17.090228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.346 [2024-12-16 06:04:17.090235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.346 [2024-12-16 06:04:17.090244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.346 [2024-12-16 06:04:17.092858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.346 [2024-12-16 06:04:17.102344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.346 [2024-12-16 06:04:17.102761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.346 [2024-12-16 06:04:17.102776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.346 [2024-12-16 06:04:17.102784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.346 [2024-12-16 06:04:17.102959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.346 [2024-12-16 06:04:17.103127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.346 [2024-12-16 06:04:17.103135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.346 [2024-12-16 06:04:17.103141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.346 [2024-12-16 06:04:17.105729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.346 [2024-12-16 06:04:17.115095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.346 [2024-12-16 06:04:17.115532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.346 [2024-12-16 06:04:17.115582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.346 [2024-12-16 06:04:17.115606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.346 [2024-12-16 06:04:17.116159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.346 [2024-12-16 06:04:17.116326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.346 [2024-12-16 06:04:17.116333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.346 [2024-12-16 06:04:17.116340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.346 [2024-12-16 06:04:17.118933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.346 [2024-12-16 06:04:17.127833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.346 [2024-12-16 06:04:17.128173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.346 [2024-12-16 06:04:17.128190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.346 [2024-12-16 06:04:17.128197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.346 [2024-12-16 06:04:17.128356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.346 [2024-12-16 06:04:17.128514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.346 [2024-12-16 06:04:17.128521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.346 [2024-12-16 06:04:17.128528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.346 [2024-12-16 06:04:17.131256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.346 [2024-12-16 06:04:17.140904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.346 [2024-12-16 06:04:17.141258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.346 [2024-12-16 06:04:17.141274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.346 [2024-12-16 06:04:17.141282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.346 [2024-12-16 06:04:17.141453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.347 [2024-12-16 06:04:17.141626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.347 [2024-12-16 06:04:17.141634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.347 [2024-12-16 06:04:17.141640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.347 [2024-12-16 06:04:17.144354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.347 [2024-12-16 06:04:17.153700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.347 [2024-12-16 06:04:17.154142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.347 [2024-12-16 06:04:17.154158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.347 [2024-12-16 06:04:17.154165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.347 [2024-12-16 06:04:17.154331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.347 [2024-12-16 06:04:17.154498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.347 [2024-12-16 06:04:17.154505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.347 [2024-12-16 06:04:17.154512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.347 [2024-12-16 06:04:17.157125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.347 [2024-12-16 06:04:17.166432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.347 [2024-12-16 06:04:17.166895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.347 [2024-12-16 06:04:17.166940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.347 [2024-12-16 06:04:17.166964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.347 [2024-12-16 06:04:17.167542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.347 [2024-12-16 06:04:17.168139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.347 [2024-12-16 06:04:17.168165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.347 [2024-12-16 06:04:17.168190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.347 [2024-12-16 06:04:17.170780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.347 [2024-12-16 06:04:17.179163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.347 [2024-12-16 06:04:17.179566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.347 [2024-12-16 06:04:17.179582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.347 [2024-12-16 06:04:17.179589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.347 [2024-12-16 06:04:17.179758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.347 [2024-12-16 06:04:17.179947] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.347 [2024-12-16 06:04:17.179956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.347 [2024-12-16 06:04:17.179962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.347 [2024-12-16 06:04:17.182570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.347 [2024-12-16 06:04:17.191983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.347 [2024-12-16 06:04:17.192398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.347 [2024-12-16 06:04:17.192414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.347 [2024-12-16 06:04:17.192421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.347 [2024-12-16 06:04:17.192578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.347 [2024-12-16 06:04:17.192736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.347 [2024-12-16 06:04:17.192742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.347 [2024-12-16 06:04:17.192748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.347 [2024-12-16 06:04:17.195450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.608 [2024-12-16 06:04:17.204817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.608 [2024-12-16 06:04:17.205256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.608 [2024-12-16 06:04:17.205272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.608 [2024-12-16 06:04:17.205279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.608 [2024-12-16 06:04:17.205450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.608 [2024-12-16 06:04:17.205622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.608 [2024-12-16 06:04:17.205630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.608 [2024-12-16 06:04:17.205636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.608 [2024-12-16 06:04:17.208284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.608 [2024-12-16 06:04:17.217606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.608 [2024-12-16 06:04:17.218021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.608 [2024-12-16 06:04:17.218037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.608 [2024-12-16 06:04:17.218044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.608 [2024-12-16 06:04:17.218201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.608 [2024-12-16 06:04:17.218358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.608 [2024-12-16 06:04:17.218365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.608 [2024-12-16 06:04:17.218374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.608 [2024-12-16 06:04:17.220989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.608 [2024-12-16 06:04:17.230381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.608 [2024-12-16 06:04:17.230832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.608 [2024-12-16 06:04:17.230888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.608 [2024-12-16 06:04:17.230912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.608 [2024-12-16 06:04:17.231358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.608 [2024-12-16 06:04:17.231524] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.608 [2024-12-16 06:04:17.231531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.608 [2024-12-16 06:04:17.231538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.608 [2024-12-16 06:04:17.235696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.608 [2024-12-16 06:04:17.243999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.608 [2024-12-16 06:04:17.244468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.608 [2024-12-16 06:04:17.244512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.608 [2024-12-16 06:04:17.244535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.608 [2024-12-16 06:04:17.245113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.608 [2024-12-16 06:04:17.245296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.608 [2024-12-16 06:04:17.245303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.608 [2024-12-16 06:04:17.245310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.608 [2024-12-16 06:04:17.248230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.608 [2024-12-16 06:04:17.256750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.608 [2024-12-16 06:04:17.257200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.608 [2024-12-16 06:04:17.257216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.608 [2024-12-16 06:04:17.257224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.608 [2024-12-16 06:04:17.257390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.608 [2024-12-16 06:04:17.257556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.608 [2024-12-16 06:04:17.257564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.608 [2024-12-16 06:04:17.257570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.608 [2024-12-16 06:04:17.260186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.608 [2024-12-16 06:04:17.269583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.608 [2024-12-16 06:04:17.270026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.608 [2024-12-16 06:04:17.270045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.608 [2024-12-16 06:04:17.270052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.608 [2024-12-16 06:04:17.270210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.608 [2024-12-16 06:04:17.270367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.608 [2024-12-16 06:04:17.270374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.608 [2024-12-16 06:04:17.270380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.608 [2024-12-16 06:04:17.272994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.608 [2024-12-16 06:04:17.282386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.608 [2024-12-16 06:04:17.282804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.608 [2024-12-16 06:04:17.282840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.608 [2024-12-16 06:04:17.282880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.608 [2024-12-16 06:04:17.283458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.608 [2024-12-16 06:04:17.283674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.608 [2024-12-16 06:04:17.283682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.608 [2024-12-16 06:04:17.283688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.608 [2024-12-16 06:04:17.286326] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.608 [2024-12-16 06:04:17.295181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.608 [2024-12-16 06:04:17.295598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.608 [2024-12-16 06:04:17.295614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.608 [2024-12-16 06:04:17.295621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.608 [2024-12-16 06:04:17.295787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.608 [2024-12-16 06:04:17.295958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.608 [2024-12-16 06:04:17.295966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.608 [2024-12-16 06:04:17.295973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.608 [2024-12-16 06:04:17.298561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3573092 Killed "${NVMF_APP[@]}" "$@" 00:35:43.608 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:43.608 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:43.608 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:43.608 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:43.608 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:43.608 [2024-12-16 06:04:17.308186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.608 [2024-12-16 06:04:17.308620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.608 [2024-12-16 06:04:17.308636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.608 [2024-12-16 06:04:17.308643] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.608 [2024-12-16 06:04:17.308813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.609 [2024-12-16 06:04:17.308990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.609 [2024-12-16 06:04:17.308998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.609 [2024-12-16 06:04:17.309004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.609 [2024-12-16 06:04:17.311743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.609 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=3574296 00:35:43.609 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 3574296 00:35:43.609 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:43.609 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3574296 ']' 00:35:43.609 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:43.609 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:43.609 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:43.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:43.609 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:43.609 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:43.609 [2024-12-16 06:04:17.321286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.609 [2024-12-16 06:04:17.321723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.609 [2024-12-16 06:04:17.321740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.609 [2024-12-16 06:04:17.321747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.609 [2024-12-16 06:04:17.321922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.609 [2024-12-16 06:04:17.322094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.609 [2024-12-16 06:04:17.322102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.609 [2024-12-16 06:04:17.322108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.609 [2024-12-16 06:04:17.324849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
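The "Killed "${NVMF_APP[@]}" "$@"" message from bdevperf.sh explains the connect() failures above: the nvmf target process was deliberately killed, so every reconnect attempt to 10.0.0.2:4420 is refused (errno 111) until tgt_init/nvmfappstart bring a new target up. A minimal sketch of that restart-and-wait step, reusing the binary path, namespace, and arguments visible in the surrounding log but with an illustrative wait loop on the RPC socket the script reports waiting on (/var/tmp/spdk.sock), could look like:

    # Sketch only: restart the NVMe-oF target and wait for its RPC socket to appear.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!                               # pid of the ip-netns wrapper in this sketch
    while [ ! -S /var/tmp/spdk.sock ]; do    # same socket the real script waits on
        kill -0 "$nvmfpid" 2>/dev/null || { echo "target exited early"; exit 1; }
        sleep 0.5
    done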
00:35:43.609 [2024-12-16 06:04:17.334227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.609 [2024-12-16 06:04:17.334636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.609 [2024-12-16 06:04:17.334651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.609 [2024-12-16 06:04:17.334658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.609 [2024-12-16 06:04:17.334829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.609 [2024-12-16 06:04:17.335009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.609 [2024-12-16 06:04:17.335018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.609 [2024-12-16 06:04:17.335024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.609 [2024-12-16 06:04:17.337760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.609 [2024-12-16 06:04:17.347305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.609 [2024-12-16 06:04:17.347753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.609 [2024-12-16 06:04:17.347770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.609 [2024-12-16 06:04:17.347777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.609 [2024-12-16 06:04:17.347954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.609 [2024-12-16 06:04:17.348126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.609 [2024-12-16 06:04:17.348133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.609 [2024-12-16 06:04:17.348140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.609 [2024-12-16 06:04:17.350890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.609 [2024-12-16 06:04:17.359209] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:35:43.609 [2024-12-16 06:04:17.359247] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:43.609 [2024-12-16 06:04:17.360378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.609 [2024-12-16 06:04:17.360816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.609 [2024-12-16 06:04:17.360832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.609 [2024-12-16 06:04:17.360840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.609 [2024-12-16 06:04:17.361017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.609 [2024-12-16 06:04:17.361190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.609 [2024-12-16 06:04:17.361198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.609 [2024-12-16 06:04:17.361205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.609 [2024-12-16 06:04:17.363944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.609 [2024-12-16 06:04:17.373601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.609 [2024-12-16 06:04:17.374036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.609 [2024-12-16 06:04:17.374053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.609 [2024-12-16 06:04:17.374061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.609 [2024-12-16 06:04:17.374233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.609 [2024-12-16 06:04:17.374404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.609 [2024-12-16 06:04:17.374415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.609 [2024-12-16 06:04:17.374422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.609 [2024-12-16 06:04:17.377167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.609 [2024-12-16 06:04:17.386568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.609 [2024-12-16 06:04:17.386975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.609 [2024-12-16 06:04:17.386993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.609 [2024-12-16 06:04:17.387000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.609 [2024-12-16 06:04:17.387172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.609 [2024-12-16 06:04:17.387345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.609 [2024-12-16 06:04:17.387352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.609 [2024-12-16 06:04:17.387360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.609 [2024-12-16 06:04:17.390113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.609 [2024-12-16 06:04:17.399659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.609 [2024-12-16 06:04:17.400114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.609 [2024-12-16 06:04:17.400130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.609 [2024-12-16 06:04:17.400138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.609 [2024-12-16 06:04:17.400310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.609 [2024-12-16 06:04:17.400481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.609 [2024-12-16 06:04:17.400488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.609 [2024-12-16 06:04:17.400495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.609 [2024-12-16 06:04:17.403242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.609 [2024-12-16 06:04:17.412634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.609 [2024-12-16 06:04:17.413052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.609 [2024-12-16 06:04:17.413069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.609 [2024-12-16 06:04:17.413077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.609 [2024-12-16 06:04:17.413244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.609 [2024-12-16 06:04:17.413411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.609 [2024-12-16 06:04:17.413418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.609 [2024-12-16 06:04:17.413425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.609 [2024-12-16 06:04:17.416154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.609 [2024-12-16 06:04:17.419755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:43.609 [2024-12-16 06:04:17.425557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.609 [2024-12-16 06:04:17.425993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.609 [2024-12-16 06:04:17.426011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.609 [2024-12-16 06:04:17.426019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.609 [2024-12-16 06:04:17.426191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.609 [2024-12-16 06:04:17.426363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.610 [2024-12-16 06:04:17.426370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.610 [2024-12-16 06:04:17.426377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.610 [2024-12-16 06:04:17.429074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.610 [2024-12-16 06:04:17.438498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.610 [2024-12-16 06:04:17.438940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-12-16 06:04:17.438960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.610 [2024-12-16 06:04:17.438968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.610 [2024-12-16 06:04:17.439151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.610 [2024-12-16 06:04:17.439324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.610 [2024-12-16 06:04:17.439332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.610 [2024-12-16 06:04:17.439339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.610 [2024-12-16 06:04:17.442035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.610 [2024-12-16 06:04:17.451488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.610 [2024-12-16 06:04:17.451936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.610 [2024-12-16 06:04:17.451956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.610 [2024-12-16 06:04:17.451964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.610 [2024-12-16 06:04:17.452146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.610 [2024-12-16 06:04:17.452314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.610 [2024-12-16 06:04:17.452321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.610 [2024-12-16 06:04:17.452328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.610 [2024-12-16 06:04:17.455039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.610 [2024-12-16 06:04:17.459256] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:43.610 [2024-12-16 06:04:17.459285] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:43.610 [2024-12-16 06:04:17.459293] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:43.610 [2024-12-16 06:04:17.459303] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:43.610 [2024-12-16 06:04:17.459308] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
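The app_setup_trace notices above spell out the capture recipe for the 0xFFFF tracepoint mask: run spdk_trace against instance 0 while the app is up, or simply keep the shared-memory file it names. Assuming the usual SPDK build-tree layout for this run, that is:

    # live snapshot of the running nvmf app, exactly as the notice suggests
    ./build/bin/spdk_trace -s nvmf -i 0
    # or preserve the trace shm file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0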
00:35:43.610 [2024-12-16 06:04:17.459347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:35:43.610 [2024-12-16 06:04:17.459439] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:35:43.610 [2024-12-16 06:04:17.459440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:43.917 [2024-12-16 06:04:17.464445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.917 [2024-12-16 06:04:17.464903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.917 [2024-12-16 06:04:17.464924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.917 [2024-12-16 06:04:17.464934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.917 [2024-12-16 06:04:17.465108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.917 [2024-12-16 06:04:17.465282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.917 [2024-12-16 06:04:17.465290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.917 [2024-12-16 06:04:17.465298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.917 [2024-12-16 06:04:17.468043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.917 [2024-12-16 06:04:17.477429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.917 [2024-12-16 06:04:17.477868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.917 [2024-12-16 06:04:17.477890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.917 [2024-12-16 06:04:17.477899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.917 [2024-12-16 06:04:17.478074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.917 [2024-12-16 06:04:17.478248] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.917 [2024-12-16 06:04:17.478257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.917 [2024-12-16 06:04:17.478265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.917 [2024-12-16 06:04:17.481005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
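The three reactor lines match the core mask handed to DPDK in the EAL parameters earlier in this log: -c 0xE is binary 1110, which selects cores 1, 2 and 3 and leaves core 0 out, hence 'Total cores available: 3'. A one-liner to decode any such mask:

    # prints: 0xE -> cores: 1 2 3
    mask=0xE; printf '%s -> cores:' "$mask"; for i in {0..7}; do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done; echo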
00:35:43.917 [2024-12-16 06:04:17.490389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.917 [2024-12-16 06:04:17.490844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.917 [2024-12-16 06:04:17.490870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.917 [2024-12-16 06:04:17.490880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.917 [2024-12-16 06:04:17.491053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.917 [2024-12-16 06:04:17.491228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.917 [2024-12-16 06:04:17.491236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.917 [2024-12-16 06:04:17.491244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.917 [2024-12-16 06:04:17.493989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.917 [2024-12-16 06:04:17.503376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.917 [2024-12-16 06:04:17.503835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.917 [2024-12-16 06:04:17.503863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.918 [2024-12-16 06:04:17.503873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.918 [2024-12-16 06:04:17.504049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.918 [2024-12-16 06:04:17.504222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.918 [2024-12-16 06:04:17.504230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.918 [2024-12-16 06:04:17.504238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.918 [2024-12-16 06:04:17.506980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.918 [2024-12-16 06:04:17.516377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.918 [2024-12-16 06:04:17.516815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-12-16 06:04:17.516835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.918 [2024-12-16 06:04:17.516844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.918 [2024-12-16 06:04:17.517025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.918 [2024-12-16 06:04:17.517198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.918 [2024-12-16 06:04:17.517206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.918 [2024-12-16 06:04:17.517214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.918 [2024-12-16 06:04:17.519960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.918 [2024-12-16 06:04:17.529373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.918 [2024-12-16 06:04:17.529815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-12-16 06:04:17.529832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.918 [2024-12-16 06:04:17.529840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.918 [2024-12-16 06:04:17.530042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.918 [2024-12-16 06:04:17.530230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.918 [2024-12-16 06:04:17.530237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.918 [2024-12-16 06:04:17.530244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.918 [2024-12-16 06:04:17.533060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.918 [2024-12-16 06:04:17.542452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.918 [2024-12-16 06:04:17.542893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-12-16 06:04:17.542910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.918 [2024-12-16 06:04:17.542922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.918 [2024-12-16 06:04:17.543094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.918 [2024-12-16 06:04:17.543265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.918 [2024-12-16 06:04:17.543272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.918 [2024-12-16 06:04:17.543279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.918 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:43.918 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:35:43.918 [2024-12-16 06:04:17.546025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.918 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:43.918 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:43.918 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:43.918 [2024-12-16 06:04:17.555407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.918 [2024-12-16 06:04:17.555864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-12-16 06:04:17.555882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.918 [2024-12-16 06:04:17.555889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.918 [2024-12-16 06:04:17.556061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.918 [2024-12-16 06:04:17.556233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.918 [2024-12-16 06:04:17.556241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.918 [2024-12-16 06:04:17.556248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.918 [2024-12-16 06:04:17.558989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.918 [2024-12-16 06:04:17.568377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.918 [2024-12-16 06:04:17.568747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-12-16 06:04:17.568764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.918 [2024-12-16 06:04:17.568773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.918 [2024-12-16 06:04:17.568952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.918 [2024-12-16 06:04:17.569125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.918 [2024-12-16 06:04:17.569132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.918 [2024-12-16 06:04:17.569139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.918 [2024-12-16 06:04:17.571879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.918 [2024-12-16 06:04:17.581441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.918 [2024-12-16 06:04:17.581770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-12-16 06:04:17.581786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.918 [2024-12-16 06:04:17.581797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.918 [2024-12-16 06:04:17.581974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.918 [2024-12-16 06:04:17.582146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.918 [2024-12-16 06:04:17.582154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.918 [2024-12-16 06:04:17.582160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.918 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:43.918 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:43.918 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.918 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:43.918 [2024-12-16 06:04:17.584905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
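The trap line above ('process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' on SIGINT/SIGTERM/EXIT) is what guarantees the teardown seen at the end of this test even if bdevperf is interrupted. Reduced to the bare idiom, with a stand-in handler body since only the trap wiring is taken from the trace:

    # stand-in for 'process_shm ...; nvmftestfini'; the trap itself mirrors the trace
    cleanup() { echo 'collect shm state, then tear down the nvmf target'; }
    trap cleanup SIGINT SIGTERM EXIT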
00:35:43.918 [2024-12-16 06:04:17.588832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:43.918 [2024-12-16 06:04:17.594448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.918 [2024-12-16 06:04:17.594786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-12-16 06:04:17.594802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.918 [2024-12-16 06:04:17.594809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.918 [2024-12-16 06:04:17.594985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.918 [2024-12-16 06:04:17.595158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.918 [2024-12-16 06:04:17.595165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.918 [2024-12-16 06:04:17.595171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.918 [2024-12-16 06:04:17.597910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.918 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.918 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:43.918 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.918 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:43.918 [2024-12-16 06:04:17.607449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.918 [2024-12-16 06:04:17.607808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-12-16 06:04:17.607824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.918 [2024-12-16 06:04:17.607831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.918 [2024-12-16 06:04:17.608010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.918 [2024-12-16 06:04:17.608181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.918 [2024-12-16 06:04:17.608188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.918 [2024-12-16 06:04:17.608195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.918 [2024-12-16 06:04:17.610938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.918 [2024-12-16 06:04:17.620488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.918 [2024-12-16 06:04:17.620932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.918 [2024-12-16 06:04:17.620949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.918 [2024-12-16 06:04:17.620958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.918 [2024-12-16 06:04:17.621131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.919 [2024-12-16 06:04:17.621305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.919 [2024-12-16 06:04:17.621313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.919 [2024-12-16 06:04:17.621320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.919 [2024-12-16 06:04:17.624072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.919 Malloc0 00:35:43.919 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.919 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:43.919 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.919 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:43.919 [2024-12-16 06:04:17.633455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.919 [2024-12-16 06:04:17.633867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-12-16 06:04:17.633884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.919 [2024-12-16 06:04:17.633891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.919 [2024-12-16 06:04:17.634063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.919 [2024-12-16 06:04:17.634235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.919 [2024-12-16 06:04:17.634244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.919 [2024-12-16 06:04:17.634252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.919 [2024-12-16 06:04:17.636997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:43.919 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.919 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:43.919 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.919 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:43.919 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.919 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:43.919 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.919 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:43.919 4968.67 IOPS, 19.41 MiB/s [2024-12-16T05:04:17.775Z] [2024-12-16 06:04:17.647827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.919 [2024-12-16 06:04:17.648221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:43.919 [2024-12-16 06:04:17.648241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aeb50 with addr=10.0.0.2, port=4420 00:35:43.919 [2024-12-16 06:04:17.648249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aeb50 is same with the state(6) to be set 00:35:43.919 [2024-12-16 06:04:17.648419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aeb50 (9): Bad file descriptor 00:35:43.919 [2024-12-16 06:04:17.648592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:43.919 [2024-12-16 06:04:17.648600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:43.919 [2024-12-16 06:04:17.648607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:43.919 [2024-12-16 06:04:17.648904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:43.919 [2024-12-16 06:04:17.651356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:43.919 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.919 06:04:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3573390 00:35:43.919 [2024-12-16 06:04:17.660902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:43.919 [2024-12-16 06:04:17.691665] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
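Stitched together, the rpc_cmd calls threaded through this stretch are the whole target-side setup: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem cnode1, its namespace, and the 10.0.0.2:4420 listener whose 'Target Listening' notice appears just above; only once that listener exists does the controller reset finally report 'Resetting controller successful' instead of ECONNREFUSED. rpc_cmd is the harness wrapper around scripts/rpc.py, so replayed by hand (default RPC socket assumed) the sequence is roughly:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420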
00:35:45.852 5808.00 IOPS, 22.69 MiB/s [2024-12-16T05:04:21.085Z] 6526.12 IOPS, 25.49 MiB/s [2024-12-16T05:04:22.020Z] 7033.56 IOPS, 27.47 MiB/s [2024-12-16T05:04:22.956Z] 7466.70 IOPS, 29.17 MiB/s [2024-12-16T05:04:23.892Z] 7820.91 IOPS, 30.55 MiB/s [2024-12-16T05:04:24.829Z] 8087.00 IOPS, 31.59 MiB/s [2024-12-16T05:04:25.764Z] 8342.54 IOPS, 32.59 MiB/s [2024-12-16T05:04:26.701Z] 8538.93 IOPS, 33.36 MiB/s 00:35:52.845 Latency(us) 00:35:52.845 [2024-12-16T05:04:26.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:52.845 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:52.845 Verification LBA range: start 0x0 length 0x4000 00:35:52.845 Nvme1n1 : 15.00 8707.88 34.02 10955.34 0.00 6489.74 425.20 14605.17 00:35:52.845 [2024-12-16T05:04:26.701Z] =================================================================================================================== 00:35:52.845 [2024-12-16T05:04:26.701Z] Total : 8707.88 34.02 10955.34 0.00 6489.74 425.20 14605.17 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:53.104 rmmod nvme_tcp 00:35:53.104 rmmod nvme_fabrics 00:35:53.104 rmmod nvme_keyring 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@513 -- # '[' -n 3574296 ']' 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # killprocess 3574296 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 3574296 ']' 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 3574296 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:53.104 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3574296 00:35:53.363 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:53.363 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:53.363 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3574296' 00:35:53.363 killing process with pid 3574296 00:35:53.364 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 3574296 00:35:53.364 06:04:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 3574296 00:35:53.364 06:04:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:53.364 06:04:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:53.364 06:04:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:53.364 06:04:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:53.364 06:04:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-save 00:35:53.364 06:04:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-restore 00:35:53.364 06:04:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:53.364 06:04:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:53.364 06:04:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:53.364 06:04:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:53.364 06:04:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:53.364 06:04:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:55.899 00:35:55.899 real 0m25.445s 00:35:55.899 user 1m0.274s 00:35:55.899 sys 0m6.303s 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:55.899 ************************************ 00:35:55.899 END TEST nvmf_bdevperf 00:35:55.899 ************************************ 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.899 ************************************ 00:35:55.899 START TEST nvmf_target_disconnect 00:35:55.899 ************************************ 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:55.899 * Looking for test storage... 
00:35:55.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:55.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.899 --rc genhtml_branch_coverage=1 00:35:55.899 --rc genhtml_function_coverage=1 00:35:55.899 --rc genhtml_legend=1 00:35:55.899 --rc geninfo_all_blocks=1 00:35:55.899 --rc geninfo_unexecuted_blocks=1 00:35:55.899 00:35:55.899 ' 00:35:55.899 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:55.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.900 --rc genhtml_branch_coverage=1 00:35:55.900 --rc genhtml_function_coverage=1 00:35:55.900 --rc genhtml_legend=1 00:35:55.900 --rc geninfo_all_blocks=1 00:35:55.900 --rc geninfo_unexecuted_blocks=1 00:35:55.900 00:35:55.900 ' 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:55.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.900 --rc genhtml_branch_coverage=1 00:35:55.900 --rc genhtml_function_coverage=1 00:35:55.900 --rc genhtml_legend=1 00:35:55.900 --rc geninfo_all_blocks=1 00:35:55.900 --rc geninfo_unexecuted_blocks=1 00:35:55.900 00:35:55.900 ' 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:55.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.900 --rc genhtml_branch_coverage=1 00:35:55.900 --rc genhtml_function_coverage=1 00:35:55.900 --rc genhtml_legend=1 00:35:55.900 --rc geninfo_all_blocks=1 00:35:55.900 --rc geninfo_unexecuted_blocks=1 00:35:55.900 00:35:55.900 ' 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:55.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:35:55.900 06:04:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:01.174 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:01.174 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:01.174 Found net devices under 0000:af:00.0: cvl_0_0 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:01.174 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:01.175 Found net devices under 0000:af:00.1: cvl_0_1 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:01.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:01.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:36:01.175 00:36:01.175 --- 10.0.0.2 ping statistics --- 00:36:01.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.175 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:01.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:01.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:36:01.175 00:36:01.175 --- 10.0.0.1 ping statistics --- 00:36:01.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.175 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # return 0 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:01.175 06:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:01.175 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:01.175 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:01.175 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:01.175 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:01.444 ************************************ 00:36:01.444 START TEST nvmf_target_disconnect_tc1 00:36:01.444 ************************************ 00:36:01.444 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:36:01.444 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:01.444 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:36:01.444 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:01.444 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:01.444 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:01.444 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:01.444 06:04:35 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:01.444 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:01.444 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:01.444 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:01.444 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:01.444 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:01.444 [2024-12-16 06:04:35.155335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.444 [2024-12-16 06:04:35.155388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3fd10 with addr=10.0.0.2, port=4420 00:36:01.444 [2024-12-16 06:04:35.155407] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:01.444 [2024-12-16 06:04:35.155417] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:01.444 [2024-12-16 06:04:35.155424] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:36:01.444 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:01.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:01.444 Initializing NVMe Controllers 00:36:01.444 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:36:01.444 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:01.444 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:01.444 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:01.444 00:36:01.444 real 0m0.107s 00:36:01.444 user 0m0.043s 00:36:01.444 sys 0m0.063s 00:36:01.445 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:01.445 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:01.445 ************************************ 00:36:01.445 END TEST nvmf_target_disconnect_tc1 00:36:01.445 ************************************ 00:36:01.445 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:01.445 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:01.445 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 
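nvmf_target_disconnect_tc1 above is a negative test: it launches the reconnect example against 10.0.0.2:4420 before any target is listening, so connect() fails with errno 111 (ECONNREFUSED), spdk_nvme_probe() cannot create the admin qpair, and the example exits non-zero; the harness's NOT wrapper then treats es=1 as a pass. A minimal sketch of that expect-failure pattern, where not_expected_to_succeed is a hypothetical stand-in for the NOT helper:

    RECONNECT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect

    # Run a command that is supposed to fail; succeed only if it really does fail.
    not_expected_to_succeed() {
        if "$@"; then
            echo "command unexpectedly succeeded: $*" >&2
            return 1
        fi
        return 0
    }

    # No target listens on 10.0.0.2:4420 yet, so this exits non-zero (ECONNREFUSED),
    # which is exactly the behaviour tc1 asserts.
    not_expected_to_succeed "$RECONNECT" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'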
00:36:01.445 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:01.445 ************************************ 00:36:01.445 START TEST nvmf_target_disconnect_tc2 00:36:01.445 ************************************ 00:36:01.445 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:36:01.445 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:01.445 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:01.445 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:01.445 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:01.445 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:01.445 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=3579357 00:36:01.445 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 3579357 00:36:01.445 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:01.445 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3579357 ']' 00:36:01.445 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:01.445 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:01.445 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:01.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:01.445 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:01.445 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:01.445 [2024-12-16 06:04:35.294218] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:01.445 [2024-12-16 06:04:35.294263] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:01.704 [2024-12-16 06:04:35.368237] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:01.704 [2024-12-16 06:04:35.408403] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:01.704 [2024-12-16 06:04:35.408443] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
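For tc2, disconnect_init first starts the target on cores 4-7 (-m 0xF0) inside the cvl_0_0_ns_spdk namespace and waits for its RPC socket; the rpc_cmd entries that follow then create a 64 MB malloc bdev (512-byte blocks), a TCP transport, and subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420. A condensed sketch of that sequence, assuming rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ns=cvl_0_0_ns_spdk

    # Start the target inside the target-side namespace on cores 4-7.
    ip netns exec "$ns" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!

    # Stand-in for the harness's waitforlisten: poll until the RPC socket answers.
    until "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do sleep 0.5; done

    # Same configuration the rpc_cmd calls in the log perform.
    "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # The harness adds the discovery listener with the shorthand "discovery"; the full NQN is used here.
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 4420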
00:36:01.704 [2024-12-16 06:04:35.408452] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:01.704 [2024-12-16 06:04:35.408459] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:01.704 [2024-12-16 06:04:35.408465] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:01.704 [2024-12-16 06:04:35.408593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:36:01.704 [2024-12-16 06:04:35.408795] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:36:01.704 [2024-12-16 06:04:35.408702] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:36:01.704 [2024-12-16 06:04:35.408797] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:36:01.704 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:01.704 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:36:01.704 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:01.704 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:01.704 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:01.704 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:01.704 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:01.704 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.704 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:01.963 Malloc0 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:01.963 [2024-12-16 06:04:35.579368] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:01.963 06:04:35 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:01.963 [2024-12-16 06:04:35.611635] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3579385 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:01.963 06:04:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:03.874 06:04:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3579357 00:36:03.874 06:04:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:03.874 Read completed with error (sct=0, sc=8) 00:36:03.874 starting I/O failed 00:36:03.874 Read completed with error (sct=0, sc=8) 00:36:03.874 starting I/O failed 00:36:03.874 Read completed with error (sct=0, sc=8) 00:36:03.874 starting I/O failed 00:36:03.874 Write completed with error (sct=0, sc=8) 00:36:03.874 starting I/O failed 00:36:03.874 Write completed with error (sct=0, sc=8) 00:36:03.874 starting I/O failed 00:36:03.874 Write completed with error (sct=0, sc=8) 00:36:03.874 starting I/O failed 00:36:03.874 Read completed with 
error (sct=0, sc=8) 00:36:03.874 starting I/O failed 00:36:03.874 Write completed with error (sct=0, sc=8) 00:36:03.874 starting I/O failed 00:36:03.874 Write completed with error (sct=0, sc=8) 00:36:03.874 starting I/O failed 00:36:03.874 Write completed with error (sct=0, sc=8) 00:36:03.874 starting I/O failed 00:36:03.874 Read completed with error (sct=0, sc=8) 00:36:03.874 starting I/O failed 00:36:03.874 Read completed with error (sct=0, sc=8) 00:36:03.874 starting I/O failed 00:36:03.874 Read completed with error (sct=0, sc=8) 00:36:03.874 starting I/O failed 00:36:03.874 Read completed with error (sct=0, sc=8) 00:36:03.874 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 [2024-12-16 06:04:37.646764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, 
sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 [2024-12-16 06:04:37.646974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 
starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 [2024-12-16 06:04:37.647162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 
00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Write completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 Read completed with error (sct=0, sc=8) 00:36:03.875 starting I/O failed 00:36:03.875 [2024-12-16 06:04:37.647355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:03.876 [2024-12-16 06:04:37.647546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.647579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.647701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.647720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.647901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.647913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.648025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.648036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.648142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.648153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.648307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.648320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.648404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.648416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.648612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.648624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 
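The burst of completion errors above (CQ transport error -6 on qpair ids 1-4) and the connect() retries that follow are the reconnect workload reacting to the target being killed out from under it: the harness starts the example in the background, gives it roughly two seconds of I/O, then SIGKILLs the nvmf_tgt pid. A minimal sketch of that fault-injection step, reusing the $RECONNECT path and $nvmfpid from the sketches above (both hypothetical variable names):

    # Start the reconnect workload against the live target.
    "$RECONNECT" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!

    sleep 2               # let I/O start flowing
    kill -9 "$nvmfpid"    # drop the target; every qpair now sees CQ transport errors
    sleep 2               # the ECONNREFUSED reconnect attempts above and below follow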
00:36:03.876 [2024-12-16 06:04:37.648715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.648726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.648799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.648810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.649018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.649031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.649169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.649181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.649282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.649295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.649390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.649401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.649575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.649588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.649657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.649668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.649871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.649905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.650153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.650186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 
00:36:03.876 [2024-12-16 06:04:37.650314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.650347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.650468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.650501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.650735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.650770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.650957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.650991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.651219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.651232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.651333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.651346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.651519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.651552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.651674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.651707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.652009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.652044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.652158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.652171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 
00:36:03.876 [2024-12-16 06:04:37.652411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.652444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.652704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.652738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.652877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.652913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.653063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.653075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.653243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.653256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.653348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.653360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.653571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.653605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.653829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.653870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.654008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.654041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 00:36:03.876 [2024-12-16 06:04:37.654228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.876 [2024-12-16 06:04:37.654261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.876 qpair failed and we were unable to recover it. 
00:36:03.876 [2024-12-16 06:04:37.654395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.654429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.654690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.654723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.654933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.654969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.655152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.655184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.655430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.655463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.655609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.655643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.655825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.655866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.656012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.656045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.656236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.656269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.656496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.656529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 
00:36:03.877 [2024-12-16 06:04:37.656795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.656830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.657079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.657096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.657197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.657213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.657382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.657414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.657686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.657719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.657968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.657986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.658198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.658231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.658439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.658473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.658741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.658774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.658972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.659007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 
00:36:03.877 [2024-12-16 06:04:37.659144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.659178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.659444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.659476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.659670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.659704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.659922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.659957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.660204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.660222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.660307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.660322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.660419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.660437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.660604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.660622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.660784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.660817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.661056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.661129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 
00:36:03.877 [2024-12-16 06:04:37.661399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.661436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.661627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.661662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.661868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.661904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.662038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.662056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.662168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.662186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.662393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.662415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.662604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.662623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:03.877 qpair failed and we were unable to recover it. 00:36:03.877 [2024-12-16 06:04:37.662834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.877 [2024-12-16 06:04:37.662881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.663028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.663061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.663279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.663313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 
00:36:03.878 [2024-12-16 06:04:37.663571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.663605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.663782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.663815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.664069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.664144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.664324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.664388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.664720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.664755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.664903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.664938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.665132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.665164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.665347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.665381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.665692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.665725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.665928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.665963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 
00:36:03.878 [2024-12-16 06:04:37.666200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.666233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.666380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.666413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.666682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.666715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.666909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.666944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.667091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.667123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.667317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.667350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.667674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.667707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.667979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.668014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.668143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.668160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.668329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.668347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 
00:36:03.878 [2024-12-16 06:04:37.668583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.668615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.668868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.668902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.669046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.669079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.669228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.669246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.669486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.669504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.669656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.669672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.669815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.669827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.670023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.670036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.670196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.670230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.670424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.670458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 
00:36:03.878 [2024-12-16 06:04:37.670645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.670678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.670873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.670908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.671095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.671128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.671272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.671305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.878 qpair failed and we were unable to recover it. 00:36:03.878 [2024-12-16 06:04:37.671526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.878 [2024-12-16 06:04:37.671560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.671751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.671792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.672035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.672048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.672169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.672203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.672340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.672374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.672615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.672649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 
00:36:03.879 [2024-12-16 06:04:37.672827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.672840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.672992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.673005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.673134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.673162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.673282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.673316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.673621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.673655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.673790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.673803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.673896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.673908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.674013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.674047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.674157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.674190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.674375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.674409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 
00:36:03.879 [2024-12-16 06:04:37.674621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.674655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.674906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.674941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.675083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.675123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.675264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.675276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.675421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.675454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.675663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.675697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.675887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.675921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.676100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.676112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.676272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.676306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.676542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.676576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 
00:36:03.879 [2024-12-16 06:04:37.676717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.676751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.676942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.676977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.677185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.879 [2024-12-16 06:04:37.677219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.879 qpair failed and we were unable to recover it. 00:36:03.879 [2024-12-16 06:04:37.677491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.677525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.677795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.677827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.678030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.678065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.678192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.678205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.678294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.678305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.678523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.678557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.678822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.678873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 
00:36:03.880 [2024-12-16 06:04:37.678995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.679008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.679182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.679216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.679354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.679389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.679575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.679609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.679741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.679775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.679969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.680009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.680127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.680140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.680314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.680348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.680669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.680702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.680828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.680869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 
00:36:03.880 [2024-12-16 06:04:37.681063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.681076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.681266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.681300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.681422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.681456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.681675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.681708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.681906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.681919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.682110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.682144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.682286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.682319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.682461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.682495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.682764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.682806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.683021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.683035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 
00:36:03.880 [2024-12-16 06:04:37.683189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.683223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.683422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.683455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.683729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.683763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.683959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.683972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.684075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.684086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.684177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.684187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.684348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.684360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.684467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.684478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.880 [2024-12-16 06:04:37.684653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.880 [2024-12-16 06:04:37.684687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.880 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.684824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.684867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 
00:36:03.881 [2024-12-16 06:04:37.685005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.685039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.685255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.685289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.685595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.685670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.685882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.685904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.686143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.686177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.686371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.686404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.686665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.686700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.686976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.687012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.687154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.687188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.687364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.687382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 
00:36:03.881 [2024-12-16 06:04:37.687615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.687649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.687876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.687912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.688061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.688095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.688274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.688308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.688578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.688612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.688875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.688910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.689117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.689151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.689408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.689426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.689572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.689590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.689755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.689773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 
00:36:03.881 [2024-12-16 06:04:37.690039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.690059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.690250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.690283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.690471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.690505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.690807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.690855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.691067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.691085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.691182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.691200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.691376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.691411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.691700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.691733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.691913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.691952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.692065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.692086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 
00:36:03.881 [2024-12-16 06:04:37.692202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.692238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.692486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.692519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.692788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.692832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.693018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.693037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.693201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.693235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.693460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.693495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.693688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.693722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.693904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.881 [2024-12-16 06:04:37.693922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.881 qpair failed and we were unable to recover it. 00:36:03.881 [2024-12-16 06:04:37.694088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.694122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.694325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.694359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 
00:36:03.882 [2024-12-16 06:04:37.694561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.694596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.694867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.694914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.695153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.695171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.695283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.695302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.695505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.695523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.695670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.695687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.695856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.695876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.696113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.696147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.696296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.696329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.696530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.696564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 
00:36:03.882 [2024-12-16 06:04:37.696769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.696803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.696943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.696977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.697115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.697148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.697405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.697439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.697621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.697654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.697854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.697890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.698095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.698117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.698333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.698366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.698562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.698596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.698872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.698908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 
00:36:03.882 [2024-12-16 06:04:37.699160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.699194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.699492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.699526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.699701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.699735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.699932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.699966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.700120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.700161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.700397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.700415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.700660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.700697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.700889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.700924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.701218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.701261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.701369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.701386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 
00:36:03.882 [2024-12-16 06:04:37.701645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.701681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.701815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.701833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.702031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.702049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.702162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.702180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.702442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.702475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.702770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.702804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.702949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.702984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.882 [2024-12-16 06:04:37.703167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.882 [2024-12-16 06:04:37.703185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.882 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.703347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.703365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.703630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.703664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 
00:36:03.883 [2024-12-16 06:04:37.703865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.703884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.703973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.703990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.704089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.704106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.704309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.704343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.704570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.704604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.704797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.704832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.704972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.705006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.705199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.705233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.705427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.705460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.705734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.705773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 
00:36:03.883 [2024-12-16 06:04:37.705867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.705884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.706135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.706169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.706365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.706398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.706649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.706683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.706820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.706864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.707058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.707076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.707291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.707329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.707597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.707637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.707834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.707888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.708045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.708065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 
00:36:03.883 [2024-12-16 06:04:37.708212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.708231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.708408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.708426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.708682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.708718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.708902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.708938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.709180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.709199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.709348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.709365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.709533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.709568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.709859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.709896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.710105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.710139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.710324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.710343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 
00:36:03.883 [2024-12-16 06:04:37.710447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.710466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.710854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.710877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.711037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.711055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.711249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.711284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.711553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.883 [2024-12-16 06:04:37.711589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.883 qpair failed and we were unable to recover it. 00:36:03.883 [2024-12-16 06:04:37.711813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.711873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.712157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.712191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.712437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.712471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.712696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.712729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.712869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.712888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 
00:36:03.884 [2024-12-16 06:04:37.713058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.713091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.713337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.713372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.713571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.713605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.713864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.713899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.714049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.714073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.714249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.714283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.714569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.714604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.714865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.714901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.715096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.715114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.715206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.715222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 
00:36:03.884 [2024-12-16 06:04:37.715439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.715457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.715604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.715622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.715808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.715843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.716084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.716118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.716257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.716291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.716540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.716559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.716774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.716809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.717016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.717035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.717195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.717229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.717416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.717450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 
00:36:03.884 [2024-12-16 06:04:37.717629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.717663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.717878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.717897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.718126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.718161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.718353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.718388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.718685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.718720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.718984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.719019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.719287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.884 [2024-12-16 06:04:37.719322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.884 qpair failed and we were unable to recover it. 00:36:03.884 [2024-12-16 06:04:37.719445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.885 [2024-12-16 06:04:37.719464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.885 qpair failed and we were unable to recover it. 00:36:03.885 [2024-12-16 06:04:37.719717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.885 [2024-12-16 06:04:37.719735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.885 qpair failed and we were unable to recover it. 00:36:03.885 [2024-12-16 06:04:37.719911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.885 [2024-12-16 06:04:37.719930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.885 qpair failed and we were unable to recover it. 
00:36:03.885 [2024-12-16 06:04:37.720031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.885 [2024-12-16 06:04:37.720047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.885 qpair failed and we were unable to recover it. 00:36:03.885 [2024-12-16 06:04:37.720309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.885 [2024-12-16 06:04:37.720344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.885 qpair failed and we were unable to recover it. 00:36:03.885 [2024-12-16 06:04:37.720665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.885 [2024-12-16 06:04:37.720700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.885 qpair failed and we were unable to recover it. 00:36:03.885 [2024-12-16 06:04:37.720897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.885 [2024-12-16 06:04:37.720933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.885 qpair failed and we were unable to recover it. 00:36:03.885 [2024-12-16 06:04:37.721227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.885 [2024-12-16 06:04:37.721261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.885 qpair failed and we were unable to recover it. 00:36:03.885 [2024-12-16 06:04:37.721450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.885 [2024-12-16 06:04:37.721469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.885 qpair failed and we were unable to recover it. 00:36:03.885 [2024-12-16 06:04:37.721642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.885 [2024-12-16 06:04:37.721661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.885 qpair failed and we were unable to recover it. 00:36:03.885 [2024-12-16 06:04:37.721851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.885 [2024-12-16 06:04:37.721871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.885 qpair failed and we were unable to recover it. 00:36:03.885 [2024-12-16 06:04:37.722058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.885 [2024-12-16 06:04:37.722077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.885 qpair failed and we were unable to recover it. 00:36:03.885 [2024-12-16 06:04:37.722256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.885 [2024-12-16 06:04:37.722291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.885 qpair failed and we were unable to recover it. 
00:36:03.885 [2024-12-16 06:04:37.722481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.885 [2024-12-16 06:04:37.722516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.885 qpair failed and we were unable to recover it. 00:36:03.885 [2024-12-16 06:04:37.722700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.885 [2024-12-16 06:04:37.722734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.885 qpair failed and we were unable to recover it. 00:36:03.885 [2024-12-16 06:04:37.722960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.885 [2024-12-16 06:04:37.722996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.885 qpair failed and we were unable to recover it. 00:36:03.885 [2024-12-16 06:04:37.723207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.885 [2024-12-16 06:04:37.723242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.885 qpair failed and we were unable to recover it. 00:36:03.885 [2024-12-16 06:04:37.723532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.885 [2024-12-16 06:04:37.723550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:03.885 qpair failed and we were unable to recover it. 00:36:04.163 [2024-12-16 06:04:37.723762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.163 [2024-12-16 06:04:37.723785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.163 qpair failed and we were unable to recover it. 00:36:04.163 [2024-12-16 06:04:37.723968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.163 [2024-12-16 06:04:37.723987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.163 qpair failed and we were unable to recover it. 00:36:04.163 [2024-12-16 06:04:37.724141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.163 [2024-12-16 06:04:37.724159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.163 qpair failed and we were unable to recover it. 00:36:04.163 [2024-12-16 06:04:37.724331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.163 [2024-12-16 06:04:37.724350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.163 qpair failed and we were unable to recover it. 00:36:04.163 [2024-12-16 06:04:37.724657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.163 [2024-12-16 06:04:37.724675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.163 qpair failed and we were unable to recover it. 
00:36:04.163 [2024-12-16 06:04:37.724917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.163 [2024-12-16 06:04:37.724936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.163 qpair failed and we were unable to recover it. 00:36:04.163 [2024-12-16 06:04:37.725047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.163 [2024-12-16 06:04:37.725066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.163 qpair failed and we were unable to recover it. 00:36:04.163 [2024-12-16 06:04:37.725178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.163 [2024-12-16 06:04:37.725196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.163 qpair failed and we were unable to recover it. 00:36:04.163 [2024-12-16 06:04:37.725361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.725380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.725616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.725635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.725826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.725845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.726010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.726029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.726200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.726235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.726507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.726541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.726741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.726775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 
00:36:04.164 [2024-12-16 06:04:37.726980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.727016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.727197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.727230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.727414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.727434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.727696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.727731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.727982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.728017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.728149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.728183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.728390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.728424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.728705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.728740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.728992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.729011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.729174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.729192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 
00:36:04.164 [2024-12-16 06:04:37.729310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.729349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.729661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.729696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.729896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.729932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.730125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.730161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.730431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.730466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.730603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.730655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.730904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.730940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.731133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.731168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.731362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.731380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.731578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.731613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 
00:36:04.164 [2024-12-16 06:04:37.731922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.731959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.732155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.732189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.732378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.732412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.732631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.732666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.732892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.732928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.733202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.733235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.164 [2024-12-16 06:04:37.733400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.164 [2024-12-16 06:04:37.733435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.164 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.733613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.733650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.733869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.733904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.734110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.734144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 
00:36:04.165 [2024-12-16 06:04:37.734347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.734361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.734538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.734551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.734778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.734812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.735020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.735055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.735190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.735223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.735470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.735483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.735637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.735650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.735866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.735900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.736150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.736184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.736401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.736446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 
00:36:04.165 [2024-12-16 06:04:37.736644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.736677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.736871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.736906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.737035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.737048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.737141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.737151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.737248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.737261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.737487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.737501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.737700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.737713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.737869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.737882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.738119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.738132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.738281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.738315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 
00:36:04.165 [2024-12-16 06:04:37.738596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.738629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.738885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.738919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.739053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.739086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.739363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.739398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.739648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.739683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.739946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.739981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.740132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.740166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.740373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.740407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.740684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.740718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.740975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.741011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 
00:36:04.165 [2024-12-16 06:04:37.741283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.741317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.741606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.741640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.741917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.165 [2024-12-16 06:04:37.741953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.165 qpair failed and we were unable to recover it. 00:36:04.165 [2024-12-16 06:04:37.742097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.742131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.742379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.742413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.742607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.742641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.742855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.742897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.743141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.743175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.743316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.743329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.743498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.743511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 
00:36:04.166 [2024-12-16 06:04:37.743663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.743697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.743923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.743959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.744155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.744188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.744429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.744441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.744667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.744680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.744828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.744842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.745022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.745056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.745247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.745280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.745417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.745451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.745647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.745681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 
00:36:04.166 [2024-12-16 06:04:37.745939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.745975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.746254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.746288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.746556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.746589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.746787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.746821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.747034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.747068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.747340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.747374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.747653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.747666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.747946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.747993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.748102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.748114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.748320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.748354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 
00:36:04.166 [2024-12-16 06:04:37.748554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.748588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.748786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.748821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.748980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.749014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.749214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.749249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.749542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.166 [2024-12-16 06:04:37.749577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.166 qpair failed and we were unable to recover it. 00:36:04.166 [2024-12-16 06:04:37.749802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.749835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.750038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.750073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.750210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.750245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.750610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.750645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.750918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.750945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 
00:36:04.167 [2024-12-16 06:04:37.751090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.751104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.751223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.751256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.751403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.751436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.751616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.751650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.751866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.751901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.752041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.752075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.752323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.752361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.752583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.752617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.752889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.752925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.753127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.753161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 
00:36:04.167 [2024-12-16 06:04:37.753364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.753398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.753673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.753713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.753978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.754013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.754155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.754189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.754293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.754306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.754502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.754535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.754729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.754763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.754956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.754992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.755220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.755232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.755385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.755398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 
00:36:04.167 [2024-12-16 06:04:37.755591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.755625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.755857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.755892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.756166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.756200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.756334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.756367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.756672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.756706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.756897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.756933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.757068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.757101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.757228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.757262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.757513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.757526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.757593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.757604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 
00:36:04.167 [2024-12-16 06:04:37.757807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.757820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.757971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.757984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.758093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.758127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.167 [2024-12-16 06:04:37.758328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.167 [2024-12-16 06:04:37.758362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.167 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.758701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.758735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.758953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.758989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.759146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.759178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.759321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.759334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.759489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.759522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.759705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.759739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 
00:36:04.168 [2024-12-16 06:04:37.759938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.759973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.760214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.760226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.760380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.760413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.760691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.760724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.761011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.761047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.761233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.761266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.761521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.761561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.761751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.761784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.762005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.762033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.762130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.762141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 
00:36:04.168 [2024-12-16 06:04:37.762312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.762345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.762563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.762595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.762791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.762824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.762967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.763001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.763155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.763189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.763375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.763408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.763703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.763737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.763943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.763979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.764176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.764210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.764342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.764354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 
00:36:04.168 [2024-12-16 06:04:37.764533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.764567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.764747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.764779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.765054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.765089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.765282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.765294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.765392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.765426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.765721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.168 [2024-12-16 06:04:37.765754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.168 qpair failed and we were unable to recover it. 00:36:04.168 [2024-12-16 06:04:37.765890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.765925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.766057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.766089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.766235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.766268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.766541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.766575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 
00:36:04.169 [2024-12-16 06:04:37.766717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.766749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.766973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.767015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.767174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.767186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.767301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.767335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.767601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.767635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.767814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.767859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.767982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.768015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.768158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.768191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.768393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.768441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.768540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.768553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 
00:36:04.169 [2024-12-16 06:04:37.768786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.768819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.769065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.769100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.769306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.769345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.769445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.769457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.769631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.769643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.769737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.769748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.769986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.770004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.770163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.770197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.770332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.770365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.770618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.770652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 
00:36:04.169 [2024-12-16 06:04:37.770899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.770934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.771053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.771065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.771210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.771252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.771453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.771487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.771664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.771697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.771975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.772010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.772142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.772154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.772257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.772270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.772499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.772512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.772644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.772656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 
00:36:04.169 [2024-12-16 06:04:37.772744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.772754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.772980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.772994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.773194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.773206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.773296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.773306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.773454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.169 [2024-12-16 06:04:37.773498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.169 qpair failed and we were unable to recover it. 00:36:04.169 [2024-12-16 06:04:37.773707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.170 [2024-12-16 06:04:37.773741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.170 qpair failed and we were unable to recover it. 00:36:04.170 [2024-12-16 06:04:37.773931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.170 [2024-12-16 06:04:37.773966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.170 qpair failed and we were unable to recover it. 00:36:04.170 [2024-12-16 06:04:37.774224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.170 [2024-12-16 06:04:37.774236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.170 qpair failed and we were unable to recover it. 00:36:04.170 [2024-12-16 06:04:37.774394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.170 [2024-12-16 06:04:37.774428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.170 qpair failed and we were unable to recover it. 00:36:04.170 [2024-12-16 06:04:37.774696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.170 [2024-12-16 06:04:37.774729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.170 qpair failed and we were unable to recover it. 
00:36:04.170 [2024-12-16 06:04:37.774957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.170 [2024-12-16 06:04:37.774992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420
00:36:04.170 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error sequence repeats, each time ending with "qpair failed and we were unable to recover it.", for tqpair=0x7ffbb0000b90 and tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420, from 06:04:37.775173 through 06:04:37.823099 ...]
00:36:04.175 [2024-12-16 06:04:37.823371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.175 [2024-12-16 06:04:37.823412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420
00:36:04.175 qpair failed and we were unable to recover it.
00:36:04.175 [2024-12-16 06:04:37.823530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.175 [2024-12-16 06:04:37.823543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.175 qpair failed and we were unable to recover it. 00:36:04.175 [2024-12-16 06:04:37.823752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.175 [2024-12-16 06:04:37.823785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.175 qpair failed and we were unable to recover it. 00:36:04.175 [2024-12-16 06:04:37.823996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.175 [2024-12-16 06:04:37.824032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.175 qpair failed and we were unable to recover it. 00:36:04.175 [2024-12-16 06:04:37.824247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.175 [2024-12-16 06:04:37.824281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.175 qpair failed and we were unable to recover it. 00:36:04.175 [2024-12-16 06:04:37.824562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.175 [2024-12-16 06:04:37.824595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.175 qpair failed and we were unable to recover it. 00:36:04.175 [2024-12-16 06:04:37.824842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.175 [2024-12-16 06:04:37.824896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.175 qpair failed and we were unable to recover it. 00:36:04.175 [2024-12-16 06:04:37.825105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.175 [2024-12-16 06:04:37.825144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.175 qpair failed and we were unable to recover it. 00:36:04.175 [2024-12-16 06:04:37.825327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.175 [2024-12-16 06:04:37.825361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.175 qpair failed and we were unable to recover it. 00:36:04.175 [2024-12-16 06:04:37.825632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.175 [2024-12-16 06:04:37.825666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.175 qpair failed and we were unable to recover it. 00:36:04.175 [2024-12-16 06:04:37.825870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.175 [2024-12-16 06:04:37.825905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.175 qpair failed and we were unable to recover it. 
00:36:04.175 [2024-12-16 06:04:37.826107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.175 [2024-12-16 06:04:37.826140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.175 qpair failed and we were unable to recover it. 00:36:04.175 [2024-12-16 06:04:37.826346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.175 [2024-12-16 06:04:37.826380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.175 qpair failed and we were unable to recover it. 00:36:04.175 [2024-12-16 06:04:37.826594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.826607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.826709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.826720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.826872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.826908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.827106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.827139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.827290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.827325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.827546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.827586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.827809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.827823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.827998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.828011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 
00:36:04.176 [2024-12-16 06:04:37.828195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.828230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.828363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.828396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.828615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.828650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.828785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.828819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.829072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.829107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.829262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.829274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.829521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.829554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.829790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.829824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.830084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.830118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.830419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.830453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 
00:36:04.176 [2024-12-16 06:04:37.830694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.830706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.830853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.830867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.831025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.831058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.831372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.831406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.831601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.831615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.831806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.831840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.832127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.832161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.832305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.832340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.832527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.832540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.832771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.832784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 
00:36:04.176 [2024-12-16 06:04:37.832966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.832979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.833096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.833109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.833199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.833210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.833321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.833335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.833481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.833494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.833638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.833672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.833878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.833919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.834118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.834153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.834393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.834406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.834568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.834581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 
00:36:04.176 [2024-12-16 06:04:37.834822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.176 [2024-12-16 06:04:37.834835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.176 qpair failed and we were unable to recover it. 00:36:04.176 [2024-12-16 06:04:37.834993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.835007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.835218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.835251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.835588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.835621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.835883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.835919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.836129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.836162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.836417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.836451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.836702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.836737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.836932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.836967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.837122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.837155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 
00:36:04.177 [2024-12-16 06:04:37.837374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.837409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.837633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.837666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.837803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.837836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.838053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.838087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.838285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.838318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.838538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.838571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.838767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.838801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.838942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.838977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.839204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.839238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.839421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.839455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 
00:36:04.177 [2024-12-16 06:04:37.839712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.839746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.839952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.839988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.840175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.840209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.840353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.840386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.840663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.840697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.840916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.840951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.841143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.841176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.841366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.841379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.841598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.841632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.841885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.841921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 
00:36:04.177 [2024-12-16 06:04:37.842148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.842183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.842391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.842425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.842606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.842619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.842771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.842799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.177 qpair failed and we were unable to recover it. 00:36:04.177 [2024-12-16 06:04:37.842992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.177 [2024-12-16 06:04:37.843027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.843159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.843192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.843489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.843506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.843602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.843613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.843708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.843719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.843870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.843883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 
00:36:04.178 [2024-12-16 06:04:37.844055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.844088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.844283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.844318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.844452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.844487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.844661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.844674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.844930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.844943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.845111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.845144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.845358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.845392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.845615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.845649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.845861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.845896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.846159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.846192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 
00:36:04.178 [2024-12-16 06:04:37.846349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.846362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.846541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.846554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.846722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.846735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.846824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.846836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.847029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.847042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.847207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.847220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.847317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.847328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.847560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.847574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.847725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.847738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.847973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.847987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 
00:36:04.178 [2024-12-16 06:04:37.848162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.848196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.848384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.848398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.848581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.848615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.848811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.848856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.849086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.849121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.849268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.849302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.849575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.849609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.849826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.849870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.850095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.850129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.850279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.850292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 
00:36:04.178 [2024-12-16 06:04:37.850454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.850466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.850616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.850630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.850795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.850808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.178 qpair failed and we were unable to recover it. 00:36:04.178 [2024-12-16 06:04:37.850972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.178 [2024-12-16 06:04:37.851007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.851127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.851161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.851367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.851401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.851592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.851608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.851803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.851838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.852003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.852046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.852247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.852282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 
00:36:04.179 [2024-12-16 06:04:37.852583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.852596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.852752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.852765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.852974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.853010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.853216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.853250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.853407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.853445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.853543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.853556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.853646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.853658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.853877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.853891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.853985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.853997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.854138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.854150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 
00:36:04.179 [2024-12-16 06:04:37.854238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.854250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.854348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.854360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.854557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.854591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.854828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.854873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.855058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.855094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.855321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.855354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.855668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.855683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.855837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.855854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.856032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.856045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.856158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.856172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 
00:36:04.179 [2024-12-16 06:04:37.856332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.856346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.856655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.856689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.856992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.857027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.857237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.857276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.857487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.857521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.857790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.857804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.857984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.857998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.858162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.858195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.858398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.858431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 00:36:04.179 [2024-12-16 06:04:37.858727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.858762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.179 qpair failed and we were unable to recover it. 
00:36:04.179 [2024-12-16 06:04:37.858980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.179 [2024-12-16 06:04:37.859015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.859159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.859193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.859402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.859436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.859646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.859681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.859987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.860023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.860180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.860213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.860412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.860446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.860715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.860729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.860819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.860831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.861012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.861047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 
00:36:04.180 [2024-12-16 06:04:37.861231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.861265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.861414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.861448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.861698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.861711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.861954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.861968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.862118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.862131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.862366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.862400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.862678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.862711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.862901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.862936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.863145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.863179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.863387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.863420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 
00:36:04.180 [2024-12-16 06:04:37.863663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.863700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.863955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.863991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.864138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.864171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.864375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.864389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.864499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.864513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.864711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.864724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.864881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.864895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.865054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.865067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.865231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.865265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.865405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.865439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 
00:36:04.180 [2024-12-16 06:04:37.865716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.865753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.865930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.865944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.866118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.866152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.866302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.866317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.866458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.180 [2024-12-16 06:04:37.866472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.180 qpair failed and we were unable to recover it. 00:36:04.180 [2024-12-16 06:04:37.866611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.866624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.866729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.866740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.866948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.866983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.867251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.867286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.867515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.867528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 
00:36:04.181 [2024-12-16 06:04:37.867700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.867735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.867997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.868032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.868244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.868279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.868494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.868528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.868778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.868792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.868960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.868974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.869151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.869185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.869331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.869366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.869602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.869636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.869898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.869911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 
00:36:04.181 [2024-12-16 06:04:37.870086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.870122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.870275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.870310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.870635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.870670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.870868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.870904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.871142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.871175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.871329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.871363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.871514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.871527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.871764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.871778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.871989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.872003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.872113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.872127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 
00:36:04.181 [2024-12-16 06:04:37.872311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.872346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.872498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.872533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.872778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.872812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.873081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.873116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.873328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.873362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.873662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.873696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.873981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.874017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.874176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.874209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.874406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.874440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.874584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.874619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 
00:36:04.181 [2024-12-16 06:04:37.874793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.874807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.874919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.181 [2024-12-16 06:04:37.874954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.181 qpair failed and we were unable to recover it. 00:36:04.181 [2024-12-16 06:04:37.875164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.875197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.875352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.875392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.875710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.875723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.875942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.875956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.876097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.876111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.876274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.876288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.876521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.876555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.876897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.876933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 
00:36:04.182 [2024-12-16 06:04:37.877159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.877194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.877399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.877433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.877566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.877579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.877793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.877807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.877921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.877933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.878167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.878180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.878355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.878368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.878599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.878635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.878908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.878944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.879181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.879216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 
00:36:04.182 [2024-12-16 06:04:37.879426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.879461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.879725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.879739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.879928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.879942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.880123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.880136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.880369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.880382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.880549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.880563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.880654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.880666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.880817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.880831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.880957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.880993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.182 [2024-12-16 06:04:37.881127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.881161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 
00:36:04.182 [2024-12-16 06:04:37.881333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.182 [2024-12-16 06:04:37.881368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.182 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.881634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.881669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.881895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.881932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.882134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.882168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.882379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.882413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.882604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.882617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.882780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.882814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.882972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.883010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.883290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.883325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.883475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.883509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 
00:36:04.183 [2024-12-16 06:04:37.883718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.883753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.883983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.884018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.884220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.884265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.884503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.884536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.884790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.884825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.885052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.885087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.885249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.885283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.885543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.885578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.885799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.885832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.886004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.886039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 
00:36:04.183 [2024-12-16 06:04:37.886204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.886238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.886555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.886590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.886817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.886863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.887146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.887180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.887334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.887369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.887697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.887731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.887981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.888017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.888233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.888268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.888578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.888612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.888929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.888964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 
00:36:04.183 [2024-12-16 06:04:37.889220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.889254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.889488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.889522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.889769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.889783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.889947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.889961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.890106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.890140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.890290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.183 [2024-12-16 06:04:37.890325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.183 qpair failed and we were unable to recover it. 00:36:04.183 [2024-12-16 06:04:37.890587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.184 [2024-12-16 06:04:37.890621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.184 qpair failed and we were unable to recover it. 00:36:04.184 [2024-12-16 06:04:37.890843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.184 [2024-12-16 06:04:37.890887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.184 qpair failed and we were unable to recover it. 00:36:04.184 [2024-12-16 06:04:37.891046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.184 [2024-12-16 06:04:37.891080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.184 qpair failed and we were unable to recover it. 00:36:04.184 [2024-12-16 06:04:37.891363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.184 [2024-12-16 06:04:37.891398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.184 qpair failed and we were unable to recover it. 
00:36:04.184 [2024-12-16 06:04:37.891651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:04.184 [2024-12-16 06:04:37.891665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 
00:36:04.184 qpair failed and we were unable to recover it. 
[... the same three-record error (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 06:04:37.891651 through 06:04:37.941916 ...]
00:36:04.189 [2024-12-16 06:04:37.941901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:04.189 [2024-12-16 06:04:37.941916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 
00:36:04.189 qpair failed and we were unable to recover it. 
00:36:04.189 [2024-12-16 06:04:37.942084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.189 [2024-12-16 06:04:37.942119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.189 qpair failed and we were unable to recover it. 00:36:04.189 [2024-12-16 06:04:37.942256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.189 [2024-12-16 06:04:37.942290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.189 qpair failed and we were unable to recover it. 00:36:04.189 [2024-12-16 06:04:37.942488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.189 [2024-12-16 06:04:37.942523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.189 qpair failed and we were unable to recover it. 00:36:04.189 [2024-12-16 06:04:37.942801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.189 [2024-12-16 06:04:37.942817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.189 qpair failed and we were unable to recover it. 00:36:04.189 [2024-12-16 06:04:37.942978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.189 [2024-12-16 06:04:37.942992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.189 qpair failed and we were unable to recover it. 00:36:04.189 [2024-12-16 06:04:37.943225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.189 [2024-12-16 06:04:37.943241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.189 qpair failed and we were unable to recover it. 00:36:04.189 [2024-12-16 06:04:37.943456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.189 [2024-12-16 06:04:37.943491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.189 qpair failed and we were unable to recover it. 00:36:04.189 [2024-12-16 06:04:37.943704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.189 [2024-12-16 06:04:37.943740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.189 qpair failed and we were unable to recover it. 00:36:04.189 [2024-12-16 06:04:37.943898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.189 [2024-12-16 06:04:37.943913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.189 qpair failed and we were unable to recover it. 00:36:04.189 [2024-12-16 06:04:37.944098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.944111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 
00:36:04.190 [2024-12-16 06:04:37.944260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.944275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.944446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.944480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.944694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.944730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.944956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.944992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.945183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.945217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.945353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.945389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.945578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.945593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.945824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.945868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.946127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.946162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.946380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.946414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 
00:36:04.190 [2024-12-16 06:04:37.946664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.946679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.946905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.946921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.946990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.947004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.947169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.947183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.947447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.947461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.947697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.947712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.947897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.947913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.948154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.948189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.948448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.948485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.948667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.948681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 
00:36:04.190 [2024-12-16 06:04:37.948864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.948878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.948972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.948984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.949152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.949166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.949329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.949364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.949562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.949576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.949795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.949832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.950108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.950145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.950450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.950487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.950683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.950697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.950914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.950948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 
00:36:04.190 [2024-12-16 06:04:37.951210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.951248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.951478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.951514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.951746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.951782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.952064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.952111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.952323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.952359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.952610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.952624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.952703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.952739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.953000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.190 [2024-12-16 06:04:37.953037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.190 qpair failed and we were unable to recover it. 00:36:04.190 [2024-12-16 06:04:37.953229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.953264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.953543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.953578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 
00:36:04.191 [2024-12-16 06:04:37.953839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.953886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.954026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.954039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.954287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.954321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.954589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.954625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.954922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.954960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.955232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.955267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.955472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.955508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.955816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.955877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.956052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.956067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.956210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.956224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 
00:36:04.191 [2024-12-16 06:04:37.956447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.956460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.956693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.956728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.957041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.957079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.957360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.957396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.957617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.957661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.957820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.957833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.957932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.957946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.958165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.958179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.958335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.958349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.958556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.958570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 
00:36:04.191 [2024-12-16 06:04:37.958718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.958744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.958915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.958951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.959198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.959235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.959392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.959425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.959565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.959579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.959822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.959868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.960155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.960189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.960405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.960441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.960691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.960705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 00:36:04.191 [2024-12-16 06:04:37.960866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.191 [2024-12-16 06:04:37.960882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.191 qpair failed and we were unable to recover it. 
00:36:04.191 [2024-12-16 06:04:37.961068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.961104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.961386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.961420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.961618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.961654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.961809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.961826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.962068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.962083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.962260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.962275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.962496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.962532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.962795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.962831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.963125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.963141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.963300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.963313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 
00:36:04.192 [2024-12-16 06:04:37.963467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.963481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.963711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.963745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.963957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.963995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.964328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.964363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.964636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.964672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.964952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.964988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.965271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.965307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.965572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.965607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.965900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.965938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.966234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.966269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 
00:36:04.192 [2024-12-16 06:04:37.966577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.966614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.966890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.966926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.967123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.967159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.967396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.967430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.967717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.967753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.968024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.968040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.968214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.968229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.968468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.968503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.968699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.968733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.968874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.968911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 
00:36:04.192 [2024-12-16 06:04:37.969125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2ad30 is same with the state(6) to be set 00:36:04.192 [2024-12-16 06:04:37.969429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.969508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.969786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.969837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.970140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.970162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.970342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.970377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.192 [2024-12-16 06:04:37.970643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.192 [2024-12-16 06:04:37.970680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.192 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.970933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.970972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.971178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.971215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.971484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.971533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.971709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.971728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 
00:36:04.193 [2024-12-16 06:04:37.971920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.971957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.972253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.972289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.972557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.972594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.972880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.972901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.973037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.973063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.973202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.973238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.973436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.973473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.973673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.973709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.973842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.973869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.974121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.974157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 
00:36:04.193 [2024-12-16 06:04:37.974376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.974412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.974612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.974647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.974843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.974870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.975114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.975134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.975294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.975314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.975472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.975491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.975593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.975613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.975779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.975798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.976028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.976051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.976313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.976350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 
00:36:04.193 [2024-12-16 06:04:37.976487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.976523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.976808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.976843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.977044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.977082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.977279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.977314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.977511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.977548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.977743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.977777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.978066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.978103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.978386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.978421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.978719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.978754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.978958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.978980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 
00:36:04.193 [2024-12-16 06:04:37.979148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.979168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.979368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.979411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.979711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.979746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.193 qpair failed and we were unable to recover it. 00:36:04.193 [2024-12-16 06:04:37.979941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.193 [2024-12-16 06:04:37.979978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.980246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.980282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.980596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.980633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.980827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.980872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.981082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.981118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.981308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.981345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.981473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.981511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 
00:36:04.194 [2024-12-16 06:04:37.981708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.981742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.981881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.981918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.982200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.982235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.982469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.982507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.982683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.982721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.983043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.983085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.983307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.983356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.983501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.983536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.983746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.983782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.984021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.984062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 
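Every failure above follows the same two-step pattern: posix_sock_create() reports connect() failing with errno = 111, and nvme_tcp_qpair_connect_sock() then marks the qpair as unrecoverable (tqpair=0xa1cd90, later 0x7ffbb0000b90 / 0x7ffbb8000b90, which appear to be different qpair objects being retried). On Linux, errno 111 is ECONNREFUSED, which typically means nothing is accepting TCP connections at 10.0.0.2:4420 at that moment. Below is a minimal standalone sketch of how that errno surfaces from a plain POSIX connect() call; it is an illustration only, not part of the test, and the address and port simply mirror the log.

    /*
     * Minimal illustration: a connect() to an endpoint with no listener
     * fails with errno 111 (ECONNREFUSED), the same errno reported by
     * posix_sock_create above. Address/port mirror the log.
     */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port   = htons(4420),      /* NVMe/TCP default port */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the target this prints errno 111. */
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));
        }

        close(fd);
        return 0;
    }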
00:36:04.194 [2024-12-16 06:04:37.984279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.984315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.984555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.984592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.984824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.984874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.985174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.985211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.985371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.985411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.985697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.985733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.985929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.985971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.986129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.986148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.986407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.986434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.986612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.986631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 
00:36:04.194 [2024-12-16 06:04:37.986813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.986860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.987092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.987130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.987322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.987361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.987513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.987547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.987860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.987899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.988143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.988163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.988345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.988366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.988607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.988626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.988731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.988748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 00:36:04.194 [2024-12-16 06:04:37.988986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.194 [2024-12-16 06:04:37.989009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.194 qpair failed and we were unable to recover it. 
00:36:04.194 [2024-12-16 06:04:37.989225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.989244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.989426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.989447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.989724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.989759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.989907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.989943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.990134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.990171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.990429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.990468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.990677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.990713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.990900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.990948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.991175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.991194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.991280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.991296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 
00:36:04.195 [2024-12-16 06:04:37.991516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.991537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.991649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.991670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.991781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.991802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.991977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.991999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.992174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.992193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.992320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.992362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.992562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.992578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.992687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.992701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.992802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.992817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.993007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.993044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 
00:36:04.195 [2024-12-16 06:04:37.993307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.993343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.993617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.993654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.993911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.993947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.994160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.994197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.994396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.994433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.994574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.994609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.994752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.994788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.994920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.994957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.995199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.995219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.995455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.995473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 
00:36:04.195 [2024-12-16 06:04:37.995573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.995585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.995741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.995755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.995863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.995876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.995952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.995966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.996119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.996155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.996351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.996388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.996586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.996621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.996837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.195 [2024-12-16 06:04:37.996862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.195 qpair failed and we were unable to recover it. 00:36:04.195 [2024-12-16 06:04:37.997005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.196 [2024-12-16 06:04:37.997019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.196 qpair failed and we were unable to recover it. 00:36:04.196 [2024-12-16 06:04:37.997243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.196 [2024-12-16 06:04:37.997257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.196 qpair failed and we were unable to recover it. 
00:36:04.196 [2024-12-16 06:04:37.997356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.196 [2024-12-16 06:04:37.997369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.196 qpair failed and we were unable to recover it. 00:36:04.196 [2024-12-16 06:04:37.997526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.196 [2024-12-16 06:04:37.997540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.196 qpair failed and we were unable to recover it. 00:36:04.196 [2024-12-16 06:04:37.997781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.196 [2024-12-16 06:04:37.997816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.196 qpair failed and we were unable to recover it. 00:36:04.196 [2024-12-16 06:04:37.997983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.196 [2024-12-16 06:04:37.998021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.196 qpair failed and we were unable to recover it. 00:36:04.196 [2024-12-16 06:04:37.998221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.196 [2024-12-16 06:04:37.998255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.196 qpair failed and we were unable to recover it. 00:36:04.196 [2024-12-16 06:04:37.998390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.196 [2024-12-16 06:04:37.998428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.196 qpair failed and we were unable to recover it. 00:36:04.196 [2024-12-16 06:04:37.998579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.196 [2024-12-16 06:04:37.998612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.196 qpair failed and we were unable to recover it. 00:36:04.196 [2024-12-16 06:04:37.998832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.196 [2024-12-16 06:04:37.998884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.196 qpair failed and we were unable to recover it. 00:36:04.196 [2024-12-16 06:04:37.999010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.196 [2024-12-16 06:04:37.999046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.196 qpair failed and we were unable to recover it. 00:36:04.196 [2024-12-16 06:04:37.999165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.196 [2024-12-16 06:04:37.999198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.196 qpair failed and we were unable to recover it. 
00:36:04.196 [2024-12-16 06:04:37.999459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.196 [2024-12-16 06:04:37.999494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.196 qpair failed and we were unable to recover it. 00:36:04.196 [2024-12-16 06:04:37.999682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:37.999718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-12-16 06:04:37.999976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:37.999991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-12-16 06:04:38.000098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:38.000113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-12-16 06:04:38.000262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:38.000277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-12-16 06:04:38.000442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:38.000457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-12-16 06:04:38.000609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:38.000624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-12-16 06:04:38.000788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:38.000801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-12-16 06:04:38.000997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:38.001012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-12-16 06:04:38.001105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:38.001118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 
00:36:04.485 [2024-12-16 06:04:38.001334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:38.001349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-12-16 06:04:38.001446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:38.001458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-12-16 06:04:38.001700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:38.001715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-12-16 06:04:38.001861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:38.001875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-12-16 06:04:38.002089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:38.002104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-12-16 06:04:38.002247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:38.002261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-12-16 06:04:38.002421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:38.002436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-12-16 06:04:38.002506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:38.002517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-12-16 06:04:38.002674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:38.002691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-12-16 06:04:38.002780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:38.002792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 
00:36:04.485 [2024-12-16 06:04:38.003023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:38.003038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-12-16 06:04:38.003220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:38.003234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.485 [2024-12-16 06:04:38.003340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.485 [2024-12-16 06:04:38.003355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.485 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.003566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.003581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.003755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.003768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.003910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.003924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.004109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.004124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.004222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.004235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.004394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.004408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.004498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.004511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 
00:36:04.486 [2024-12-16 06:04:38.004682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.004718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.004872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.004907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.005178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.005215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.005420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.005455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.005660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.005696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.005833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.005853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.005958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.005970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.006137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.006151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.006329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.006343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.006428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.006474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 
00:36:04.486 [2024-12-16 06:04:38.006617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.006651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.006782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.006818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.007090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.007125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.007385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.007422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.007707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.007743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.007988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.008034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.008242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.008277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.008409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.008444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.008658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.008694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.008895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.008934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 
00:36:04.486 [2024-12-16 06:04:38.009058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.009093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.009366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.009403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.009532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.009567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.009704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.009740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.009962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.010011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.010184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.010203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.010307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.010324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.010542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.010561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.010731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.010750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.010923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.010960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 
00:36:04.486 [2024-12-16 06:04:38.011157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.486 [2024-12-16 06:04:38.011194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.486 qpair failed and we were unable to recover it. 00:36:04.486 [2024-12-16 06:04:38.011438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.011476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.011687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.011723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.011980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.012019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.012303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.012338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.012627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.012662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.012984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.013022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.013291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.013327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.013598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.013634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.014433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.014469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 
00:36:04.487 [2024-12-16 06:04:38.014761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.014781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.015031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.015052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.015304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.015328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.015624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.015645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.015885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.015905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.016096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.016116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.016365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.016387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.016603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.016624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.016782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.016801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.017000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.017022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 
00:36:04.487 [2024-12-16 06:04:38.017256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.017276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.017485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.017508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.017703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.017724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.017908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.017928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.018152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.018171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.018339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.018359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.018453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.018470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.018661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.018682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.018807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.018828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.019002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.019023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 
00:36:04.487 [2024-12-16 06:04:38.019248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.019288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.019554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.019570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.019788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.019803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.020038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.020054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.020148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.020160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.020323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.020339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.020496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.020510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.020696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.020711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.020944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.020959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 00:36:04.487 [2024-12-16 06:04:38.021153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.021171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.487 qpair failed and we were unable to recover it. 
00:36:04.487 [2024-12-16 06:04:38.021401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.487 [2024-12-16 06:04:38.021416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.021576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.021590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.021808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.021821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.021992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.022008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.022161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.022175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.022387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.022403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.022628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.022643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.022751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.022765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.022861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.022874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.023109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.023124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 
00:36:04.488 [2024-12-16 06:04:38.023288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.023303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.023390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.023403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.023672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.023686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.023794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.023809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.023892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.023905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.024043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.024056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.024277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.024290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.024392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.024407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.024507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.024521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.024729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.024743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 
00:36:04.488 [2024-12-16 06:04:38.024913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.024927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.025010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.025022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.025177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.025191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.025428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.025443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.025534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.025546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.025636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.025648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.025884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.025901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.025969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.025982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.026142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.026157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.026302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.026316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 
00:36:04.488 [2024-12-16 06:04:38.026487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.026502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.026650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.026663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.026741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.026754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.026902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.026918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.026999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.027010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.027111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.027125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.027284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.027299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.027456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.027471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.027613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.027628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.488 qpair failed and we were unable to recover it. 00:36:04.488 [2024-12-16 06:04:38.027711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.488 [2024-12-16 06:04:38.027726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 
00:36:04.489 [2024-12-16 06:04:38.027820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.027833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.028045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.028059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.028150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.028162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.028343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.028357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.028561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.028575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.028772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.028788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.028963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.028977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.029081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.029097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.029312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.029326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.029435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.029471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 
00:36:04.489 [2024-12-16 06:04:38.029676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.029710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.029979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.030015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.030215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.030250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.030499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.030534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.030714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.030727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.030864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.030901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.031157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.031192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.031378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.031415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.031559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.031594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.031878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.031913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 
00:36:04.489 [2024-12-16 06:04:38.032106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.032120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.032271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.032307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.032585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.032620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.032836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.032886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.033169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.033204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.033412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.033449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.033653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.033689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.033941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.033956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.034127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.034163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.034370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.034406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 
00:36:04.489 [2024-12-16 06:04:38.034658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.034693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.489 qpair failed and we were unable to recover it. 00:36:04.489 [2024-12-16 06:04:38.034998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.489 [2024-12-16 06:04:38.035033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.035339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.035374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.035565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.035600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.035733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.035768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.035975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.035990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.036094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.036108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.036339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.036353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.036431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.036443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.036576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.036630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 
00:36:04.490 [2024-12-16 06:04:38.036832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.036893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.036989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.037005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.037260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.037295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.037494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.037529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.037790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.037825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.038120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.038133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.038357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.038371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.038607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.038621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.038716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.038731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.038991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.039005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 
00:36:04.490 [2024-12-16 06:04:38.039097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.039109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.039329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.039343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.039521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.039556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.039705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.039740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.040029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.040044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.040147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.040162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.040373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.040388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.040565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.040579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.040677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.040690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.040794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.040808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 
00:36:04.490 [2024-12-16 06:04:38.040969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.040983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.041195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.041208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.041308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.041323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.041415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.041428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.041576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.041623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.041832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.041891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.042234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.042251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.042416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.042430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.042583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.042597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 00:36:04.490 [2024-12-16 06:04:38.042753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.490 [2024-12-16 06:04:38.042766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.490 qpair failed and we were unable to recover it. 
00:36:04.490 [2024-12-16 06:04:38.042982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.491 [2024-12-16 06:04:38.042997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.491 qpair failed and we were unable to recover it. 00:36:04.491 [2024-12-16 06:04:38.043238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.491 [2024-12-16 06:04:38.043252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.491 qpair failed and we were unable to recover it. 00:36:04.491 [2024-12-16 06:04:38.043349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.491 [2024-12-16 06:04:38.043362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.491 qpair failed and we were unable to recover it. 00:36:04.491 [2024-12-16 06:04:38.043454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.491 [2024-12-16 06:04:38.043465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.491 qpair failed and we were unable to recover it. 00:36:04.491 [2024-12-16 06:04:38.043632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.491 [2024-12-16 06:04:38.043646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.491 qpair failed and we were unable to recover it. 00:36:04.491 [2024-12-16 06:04:38.043883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.491 [2024-12-16 06:04:38.043899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.491 qpair failed and we were unable to recover it. 00:36:04.491 [2024-12-16 06:04:38.044063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.491 [2024-12-16 06:04:38.044077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.491 qpair failed and we were unable to recover it. 00:36:04.491 [2024-12-16 06:04:38.044190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.491 [2024-12-16 06:04:38.044204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.491 qpair failed and we were unable to recover it. 00:36:04.491 [2024-12-16 06:04:38.044289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.491 [2024-12-16 06:04:38.044301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.491 qpair failed and we were unable to recover it. 00:36:04.491 [2024-12-16 06:04:38.044457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.491 [2024-12-16 06:04:38.044473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.491 qpair failed and we were unable to recover it. 
00:36:04.491 [2024-12-16 06:04:38.044715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.491 [2024-12-16 06:04:38.044730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.491 qpair failed and we were unable to recover it. 00:36:04.491 [2024-12-16 06:04:38.044887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.491 [2024-12-16 06:04:38.044902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.491 qpair failed and we were unable to recover it. 00:36:04.491 [2024-12-16 06:04:38.045039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.491 [2024-12-16 06:04:38.045054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.491 qpair failed and we were unable to recover it. 00:36:04.491 [2024-12-16 06:04:38.045208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.491 [2024-12-16 06:04:38.045221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.491 qpair failed and we were unable to recover it. 00:36:04.491 [2024-12-16 06:04:38.045361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.491 [2024-12-16 06:04:38.045375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.491 qpair failed and we were unable to recover it. 00:36:04.491 [2024-12-16 06:04:38.045465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.491 [2024-12-16 06:04:38.045478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.491 qpair failed and we were unable to recover it. 00:36:04.491 [2024-12-16 06:04:38.045711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.491 [2024-12-16 06:04:38.045724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.491 qpair failed and we were unable to recover it. 00:36:04.491 [2024-12-16 06:04:38.045869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.491 [2024-12-16 06:04:38.045883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.491 qpair failed and we were unable to recover it. 00:36:04.491 [2024-12-16 06:04:38.046045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.491 [2024-12-16 06:04:38.046059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.491 qpair failed and we were unable to recover it. 00:36:04.491 [2024-12-16 06:04:38.046223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.491 [2024-12-16 06:04:38.046238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.491 qpair failed and we were unable to recover it. 
00:36:04.491 [2024-12-16 06:04:38.046445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:04.491 [2024-12-16 06:04:38.046458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 
00:36:04.491 qpair failed and we were unable to recover it. 
00:36:04.491 [... the same triplet -- posix_sock_create "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." -- repeats continuously with only the timestamps advancing, through 06:04:38.083741 ...] 
00:36:04.497 [2024-12-16 06:04:38.083728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:04.497 [2024-12-16 06:04:38.083741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 
00:36:04.497 qpair failed and we were unable to recover it. 
00:36:04.497 [2024-12-16 06:04:38.083813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.083824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.083970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.083983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.084131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.084145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.084215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.084227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.084322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.084333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.084484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.084496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.084573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.084586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.084690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.084701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.084844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.084865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.085013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.085026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 
00:36:04.497 [2024-12-16 06:04:38.085104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.085116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.085195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.085207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.085357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.085370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.085458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.085469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.085616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.085630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.085742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.085755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.085839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.085856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.085939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.085951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.086096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.086110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.086187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.086199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 
00:36:04.497 [2024-12-16 06:04:38.086288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.086300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.086454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.086467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.086604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.086616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.086690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.086700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.086796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.086808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.086873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.086885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.086969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.086980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.087060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.087073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.087139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.087152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.087222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.087233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 
00:36:04.497 [2024-12-16 06:04:38.087300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.087313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.087397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.087409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.087474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.087486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.087644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.087655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.087727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.087741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.087818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.087829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.087913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.087926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.497 [2024-12-16 06:04:38.087992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.497 [2024-12-16 06:04:38.088003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.497 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.088153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.088165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.088241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.088253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 
00:36:04.498 [2024-12-16 06:04:38.088322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.088333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.088423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.088435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.088654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.088666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.088801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.088814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.088911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.088924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.089001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.089012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.089160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.089174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.089409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.089422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.089527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.089540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.089623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.089636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 
00:36:04.498 [2024-12-16 06:04:38.089793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.089805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.089960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.089975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.090052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.090063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.090150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.090161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.090241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.090252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.090319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.090331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.090552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.090565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.090695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.090707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.090859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.090872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.090974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.090987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 
00:36:04.498 [2024-12-16 06:04:38.091059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.091070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.091214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.091228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.091375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.091388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.091477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.091490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.091567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.091577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.091734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.091746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.091901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.091916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.092066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.092079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.092217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.092229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.092305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.092316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 
00:36:04.498 [2024-12-16 06:04:38.092394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.092406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.092553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.498 [2024-12-16 06:04:38.092566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.498 qpair failed and we were unable to recover it. 00:36:04.498 [2024-12-16 06:04:38.092791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.092804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.092889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.092900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.092994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.093005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.093107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.093119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.093259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.093271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.093375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.093387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.093527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.093541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.093617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.093627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 
00:36:04.499 [2024-12-16 06:04:38.093714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.093725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.093867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.093880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.094036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.094049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.094265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.094278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.094425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.094438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.094584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.094598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.094736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.094751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.094917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.094931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.095109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.095122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.095347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.095360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 
00:36:04.499 [2024-12-16 06:04:38.095496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.095509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.095641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.095654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.095867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.095881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.095980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.095992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.096159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.096173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.096341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.096355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.096577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.096591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.096678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.096690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.096839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.096858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.096987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.096999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 
00:36:04.499 [2024-12-16 06:04:38.097227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.097239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.097448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.097460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.097661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.097675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.097856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.097869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.098083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.098095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.098238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.098252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.098340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.098352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.098566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.098579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.098794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.098806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.099027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.099041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 
00:36:04.499 [2024-12-16 06:04:38.099187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.499 [2024-12-16 06:04:38.099199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.499 qpair failed and we were unable to recover it. 00:36:04.499 [2024-12-16 06:04:38.099280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-12-16 06:04:38.099292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-12-16 06:04:38.099455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-12-16 06:04:38.099468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-12-16 06:04:38.099558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-12-16 06:04:38.099571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-12-16 06:04:38.099713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-12-16 06:04:38.099727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-12-16 06:04:38.099878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-12-16 06:04:38.099892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-12-16 06:04:38.100061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-12-16 06:04:38.100074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-12-16 06:04:38.100296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-12-16 06:04:38.100310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-12-16 06:04:38.100485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-12-16 06:04:38.100500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-12-16 06:04:38.100645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-12-16 06:04:38.100658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 
00:36:04.500 [2024-12-16 06:04:38.100721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-12-16 06:04:38.100734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-12-16 06:04:38.100956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-12-16 06:04:38.100970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-12-16 06:04:38.101116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-12-16 06:04:38.101129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-12-16 06:04:38.101330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-12-16 06:04:38.101343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-12-16 06:04:38.101478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-12-16 06:04:38.101490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-12-16 06:04:38.101739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-12-16 06:04:38.101752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-12-16 06:04:38.101903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-12-16 06:04:38.101918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-12-16 06:04:38.102121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-12-16 06:04:38.102134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-12-16 06:04:38.102230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-12-16 06:04:38.102243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 00:36:04.500 [2024-12-16 06:04:38.102382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.500 [2024-12-16 06:04:38.102395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.500 qpair failed and we were unable to recover it. 
00:36:04.500 [2024-12-16 06:04:38.102557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.500 [2024-12-16 06:04:38.102572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420
00:36:04.500 qpair failed and we were unable to recover it.
00:36:04.500 [the same pair of errors, posix_sock_create: *ERROR*: connect() failed, errno = 111 followed by nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.", repeats for every connection attempt from 06:04:38.102726 through 06:04:38.137707, against tqpair=0x7ffbb0000b90 and, between 06:04:38.105967 and 06:04:38.107844, tqpair=0xa1cd90]
00:36:04.505 [2024-12-16 06:04:38.137843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.505 [2024-12-16 06:04:38.137859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420
00:36:04.505 qpair failed and we were unable to recover it.
00:36:04.505 [2024-12-16 06:04:38.138059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.138072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.138211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.138223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.138384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.138397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.138556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.138570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.138730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.138742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.138901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.138915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.139049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.139060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.139212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.139225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.139422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.139435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.139643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.139656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 
00:36:04.506 [2024-12-16 06:04:38.139854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.139868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.140089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.140101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.140193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.140206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.140344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.140361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.140469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.140482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.140631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.140644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.140873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.140888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.141031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.141043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.141260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.141274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.141408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.141421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 
00:36:04.506 [2024-12-16 06:04:38.141567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.141581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.141660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.141671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.141817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.141830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.142023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.142037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.142130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.142142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.142224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.142236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.142315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.142326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.142527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.142540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.142705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.142718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.142873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.142885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 
00:36:04.506 [2024-12-16 06:04:38.143016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.143028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.143171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.143184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.143329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.143342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.143440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.143453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.143533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.143545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.143686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.143699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.143829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.143842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.144003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.144015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.144166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.506 [2024-12-16 06:04:38.144178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.506 qpair failed and we were unable to recover it. 00:36:04.506 [2024-12-16 06:04:38.144315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.144327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 
00:36:04.507 [2024-12-16 06:04:38.144542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.144555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.144630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.144642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.144714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.144725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.144973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.144987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.145078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.145092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.145181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.145194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.145396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.145410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.145558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.145571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.145711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.145725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.145924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.145938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 
00:36:04.507 [2024-12-16 06:04:38.146133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.146145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.146286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.146298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.146435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.146446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.146548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.146563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.146784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.146796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.147019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.147032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.147181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.147194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.147415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.147428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.147644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.147656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.147822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.147834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 
00:36:04.507 [2024-12-16 06:04:38.148037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.148049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.148252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.148265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.148422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.148435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.148588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.148600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.148796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.148808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.149036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.149049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.149328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.149340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.149549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.149561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.149708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.149721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.149886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.149898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 
00:36:04.507 [2024-12-16 06:04:38.149979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.149990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.150211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.150226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.150289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.507 [2024-12-16 06:04:38.150300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.507 qpair failed and we were unable to recover it. 00:36:04.507 [2024-12-16 06:04:38.150378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.150389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.150522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.150535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.150701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.150714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.150795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.150807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.151002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.151014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.151182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.151195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.151344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.151356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 
00:36:04.508 [2024-12-16 06:04:38.151487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.151499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.151661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.151674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.151852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.151865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.151999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.152012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.152100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.152112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.152259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.152272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.152348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.152359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.152568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.152584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.152792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.152805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.152902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.152916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 
00:36:04.508 [2024-12-16 06:04:38.152991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.153013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.153218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.153231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.153320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.153332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.153486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.153501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.153577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.153588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.153669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.153682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.153787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.153801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.153904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.153916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.154074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.154086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.154239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.154252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 
00:36:04.508 [2024-12-16 06:04:38.154440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.154453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.154690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.154703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.154801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.154814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.155583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.155607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.155768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.155782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.156008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.156022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.156164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.156177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.156324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.156336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.156483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.156497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.156699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.156712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 
00:36:04.508 [2024-12-16 06:04:38.156808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.156820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.156910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.508 [2024-12-16 06:04:38.156923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.508 qpair failed and we were unable to recover it. 00:36:04.508 [2024-12-16 06:04:38.157023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.157037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.157108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.157121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.157262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.157275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.157342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.157353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.157442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.157454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.157598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.157611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.157743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.157756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.157887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.157899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 
00:36:04.509 [2024-12-16 06:04:38.157988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.158002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.158076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.158087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.158220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.158233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.158376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.158389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.158479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.158489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.158568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.158581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.158642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.158654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.158796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.158808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.158885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.158897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.158974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.158984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 
00:36:04.509 [2024-12-16 06:04:38.159060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.159073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.159149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.159159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.159251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.159262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.159324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.159338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.159508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.159520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.159658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.159670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.159747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.159758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.159888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.159902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.159954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.159966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.160042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.160053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 
00:36:04.509 [2024-12-16 06:04:38.160129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.160142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.160341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.160354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.160431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.160442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.160508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.160518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.160611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.160623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.160757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.160769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.160858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.160869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.161008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.161020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.161150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.161163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.161229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.161240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 
00:36:04.509 [2024-12-16 06:04:38.161303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.509 [2024-12-16 06:04:38.161314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.509 qpair failed and we were unable to recover it. 00:36:04.509 [2024-12-16 06:04:38.161384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.161395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.161464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.161475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.161552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.161564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.161622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.161633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.161767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.161781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.161866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.161878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.161947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.161959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.162101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.162113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.162191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.162204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 
00:36:04.510 [2024-12-16 06:04:38.162301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.162338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.162429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.162456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.162614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.162633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.162705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.162717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.162798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.162809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.162876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.162887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.162959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.162970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.163113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.163125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.163268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.163281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.163425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.163437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 
00:36:04.510 [2024-12-16 06:04:38.163513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.163523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.163595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.163606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.163675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.163686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.163755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.163766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.163829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.163840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.163930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.163942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.164026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.164038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.164121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.164132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.164198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.164210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.164343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.164355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 
00:36:04.510 [2024-12-16 06:04:38.164487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.164500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.164631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.164642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.164789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.164803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.164876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.164889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.165037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.165049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.165124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.165134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.165209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.165219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.165338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.165350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.165428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.165438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 00:36:04.510 [2024-12-16 06:04:38.165507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.510 [2024-12-16 06:04:38.165517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.510 qpair failed and we were unable to recover it. 
00:36:04.510 [2024-12-16 06:04:38.165579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.165590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.165667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.165679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.165805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.165817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.165907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.165921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.166062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.166073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.166147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.166158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.166220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.166231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.166311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.166322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.166471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.166484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.166554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.166565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 
00:36:04.511 [2024-12-16 06:04:38.166620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.166633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.166706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.166716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.166800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.166812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.166972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.166985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.167076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.167088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.167169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.167181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.167313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.167326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.167482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.167494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.167556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.167567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.167645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.167657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 
00:36:04.511 [2024-12-16 06:04:38.167896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.167910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.168060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.168074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.168165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.168178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.168320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.168332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.168476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.168489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.168564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.168575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.168652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.168664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.168731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.168743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.168871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.168884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.168973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.168986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 
00:36:04.511 [2024-12-16 06:04:38.169134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.169146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.169212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.169222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.169357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.169370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.169455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.169467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.169549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.169561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.169633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.169644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.169721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.169733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.169813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.169825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.170003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.511 [2024-12-16 06:04:38.170017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.511 qpair failed and we were unable to recover it. 00:36:04.511 [2024-12-16 06:04:38.170165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.170177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 
00:36:04.512 [2024-12-16 06:04:38.170316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.170328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.170524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.170537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.170599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.170610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.170717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.170730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.170808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.170820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.171025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.171040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.171174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.171187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.171283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.171295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.171516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.171529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.171747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.171759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 
00:36:04.512 [2024-12-16 06:04:38.171925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.171942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.172075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.172087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.172222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.172235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.172446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.172458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.172590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.172602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.172747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.172759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.172968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.172982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.173078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.173090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.173249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.173261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.173392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.173404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 
00:36:04.512 [2024-12-16 06:04:38.173486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.173499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.173663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.173676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.173815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.173827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.174050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.174063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.174262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.174275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.174486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.174498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.174685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.174699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.174927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.174939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.175182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.175194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.175439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.175452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 
00:36:04.512 [2024-12-16 06:04:38.175608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.175622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.175791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.175804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.512 [2024-12-16 06:04:38.175956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.512 [2024-12-16 06:04:38.175969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.512 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.176168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.176181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.176379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.176392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.176534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.176546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.176765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.176777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.177000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.177013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.177089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.177100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.177310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.177323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 
00:36:04.513 [2024-12-16 06:04:38.177470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.177482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.177691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.177703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.177793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.177806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.177962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.177975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.178115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.178128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.178262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.178274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.178416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.178428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.178666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.178678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.178829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.178841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.179046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.179059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 
00:36:04.513 [2024-12-16 06:04:38.179223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.179239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.179451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.179463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.179603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.179615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.179757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.179769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.179913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.179928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.180067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.180080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.180260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.180272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.180353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.180365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.180552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.180564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.180705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.180718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 
00:36:04.513 [2024-12-16 06:04:38.180937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.180950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.181018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.181028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.181176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.181189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.181353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.181366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.181501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.181513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.181596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.181609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.181759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.181772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.181906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.181919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.182007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.182020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 00:36:04.513 [2024-12-16 06:04:38.182167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.513 [2024-12-16 06:04:38.182179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.513 qpair failed and we were unable to recover it. 
00:36:04.513 [2024-12-16 06:04:38.182328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.513 [2024-12-16 06:04:38.182341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420
00:36:04.513 qpair failed and we were unable to recover it.
00:36:04.519 [... the same three-line sequence (posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error against addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously from 06:04:38.182 through 06:04:38.217; all repetitions report tqpair=0x7ffbb0000b90 except the final three, which report tqpair=0x7ffbb8000b90 ...]
00:36:04.519 [2024-12-16 06:04:38.218116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.519 [2024-12-16 06:04:38.218133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.519 qpair failed and we were unable to recover it. 00:36:04.519 [2024-12-16 06:04:38.218211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.519 [2024-12-16 06:04:38.218228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.519 qpair failed and we were unable to recover it. 00:36:04.519 [2024-12-16 06:04:38.218390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.519 [2024-12-16 06:04:38.218408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.519 qpair failed and we were unable to recover it. 00:36:04.519 [2024-12-16 06:04:38.218499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.519 [2024-12-16 06:04:38.218513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.519 qpair failed and we were unable to recover it. 00:36:04.519 [2024-12-16 06:04:38.218773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.519 [2024-12-16 06:04:38.218786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.519 qpair failed and we were unable to recover it. 00:36:04.519 [2024-12-16 06:04:38.218995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.519 [2024-12-16 06:04:38.219009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.519 qpair failed and we were unable to recover it. 00:36:04.519 [2024-12-16 06:04:38.219147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.519 [2024-12-16 06:04:38.219159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.519 qpair failed and we were unable to recover it. 00:36:04.519 [2024-12-16 06:04:38.219398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.519 [2024-12-16 06:04:38.219411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.519 qpair failed and we were unable to recover it. 00:36:04.519 [2024-12-16 06:04:38.219621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.519 [2024-12-16 06:04:38.219633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.519 qpair failed and we were unable to recover it. 00:36:04.519 [2024-12-16 06:04:38.219709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.519 [2024-12-16 06:04:38.219721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.519 qpair failed and we were unable to recover it. 
00:36:04.519 [2024-12-16 06:04:38.219940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.519 [2024-12-16 06:04:38.219955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.519 qpair failed and we were unable to recover it. 00:36:04.519 [2024-12-16 06:04:38.220131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.519 [2024-12-16 06:04:38.220143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.519 qpair failed and we were unable to recover it. 00:36:04.519 [2024-12-16 06:04:38.220364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.519 [2024-12-16 06:04:38.220376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.519 qpair failed and we were unable to recover it. 00:36:04.519 [2024-12-16 06:04:38.220518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.519 [2024-12-16 06:04:38.220530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.519 qpair failed and we were unable to recover it. 00:36:04.519 [2024-12-16 06:04:38.220624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.519 [2024-12-16 06:04:38.220636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.519 qpair failed and we were unable to recover it. 00:36:04.519 [2024-12-16 06:04:38.220872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.519 [2024-12-16 06:04:38.220884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.519 qpair failed and we were unable to recover it. 00:36:04.519 [2024-12-16 06:04:38.221039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.519 [2024-12-16 06:04:38.221052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.519 qpair failed and we were unable to recover it. 00:36:04.519 [2024-12-16 06:04:38.221202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.519 [2024-12-16 06:04:38.221214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.519 qpair failed and we were unable to recover it. 00:36:04.519 [2024-12-16 06:04:38.221346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.519 [2024-12-16 06:04:38.221359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.519 qpair failed and we were unable to recover it. 00:36:04.519 [2024-12-16 06:04:38.221560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.519 [2024-12-16 06:04:38.221572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.519 qpair failed and we were unable to recover it. 
00:36:04.519 [2024-12-16 06:04:38.221704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.519 [2024-12-16 06:04:38.221716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.519 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.221856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.221870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.221960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.221971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.222113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.222127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.222340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.222352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.222488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.222501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.222642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.222655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.222738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.222750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.222914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.222927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.223095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.223106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 
00:36:04.520 [2024-12-16 06:04:38.223195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.223207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.223278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.223289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.223523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.223536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.223692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.223705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.223942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.223956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.224153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.224167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.224300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.224313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.224577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.224602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.224765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.224784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.224869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.224888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 
00:36:04.520 [2024-12-16 06:04:38.224995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.225015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.225102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.225118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.225227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.225244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.225334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.225347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.225478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.225490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.225719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.225732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.225961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.225974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.226148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.226161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.226295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.226307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.226385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.226396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 
00:36:04.520 [2024-12-16 06:04:38.226534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.226546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.226761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.226774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.226906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.226919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.227093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.227107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.227182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.227193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.227276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.227288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.227514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.227527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.227606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.227617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.227698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.227708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 00:36:04.520 [2024-12-16 06:04:38.227854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.520 [2024-12-16 06:04:38.227867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.520 qpair failed and we were unable to recover it. 
00:36:04.521 [2024-12-16 06:04:38.228017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.228029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.228183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.228197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.228343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.228355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.228514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.228527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.228681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.228694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.228914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.228927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.229009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.229021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.229259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.229272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.229413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.229427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.229494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.229505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 
00:36:04.521 [2024-12-16 06:04:38.229722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.229734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.229956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.229968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.230131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.230144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.230274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.230288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.230366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.230376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.230522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.230535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.230628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.230639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.230796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.230811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.230962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.230976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.231180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.231192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 
00:36:04.521 [2024-12-16 06:04:38.231332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.231344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.231499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.231511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.231726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.231739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.231905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.231919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.232063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.232076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.232208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.232220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.232360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.232373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.232579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.232592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.232815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.232828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.233030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.233042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 
00:36:04.521 [2024-12-16 06:04:38.233177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.233189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.233329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.233341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.233474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.233487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.233578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.521 [2024-12-16 06:04:38.233590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.521 qpair failed and we were unable to recover it. 00:36:04.521 [2024-12-16 06:04:38.233719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.233732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.233859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.233871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.234001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.234013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.234107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.234120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.234251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.234263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.234407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.234419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 
00:36:04.522 [2024-12-16 06:04:38.234500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.234511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.234716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.234728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.234955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.234968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.235186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.235198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.235345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.235358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.235455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.235467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.235692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.235706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.235862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.235875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.235958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.235969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.236193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.236207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 
00:36:04.522 [2024-12-16 06:04:38.236348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.236360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.236583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.236596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.236818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.236832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.237054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.237067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.237316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.237328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.237577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.237589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.237811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.237823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.237965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.237981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.238046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.238057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.238204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.238216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 
00:36:04.522 [2024-12-16 06:04:38.238450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.238462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.238604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.238617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.238855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.238869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.239022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.239034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.239168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.239180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.239398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.239411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.239505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.239517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.239606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.239618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.239816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.239828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 00:36:04.522 [2024-12-16 06:04:38.239998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.522 [2024-12-16 06:04:38.240010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.522 qpair failed and we were unable to recover it. 
00:36:04.522 [2024-12-16 06:04:38.240233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.522 [2024-12-16 06:04:38.240245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420
00:36:04.522 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 06:04:38.240 through 06:04:38.279 ...]
00:36:04.528 [2024-12-16 06:04:38.279356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.528 [2024-12-16 06:04:38.279367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420
00:36:04.528 qpair failed and we were unable to recover it.
00:36:04.528 [2024-12-16 06:04:38.279536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.279548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.528 [2024-12-16 06:04:38.279634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.279646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.528 [2024-12-16 06:04:38.279868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.279881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.528 [2024-12-16 06:04:38.280045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.280058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.528 [2024-12-16 06:04:38.280256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.280269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.528 [2024-12-16 06:04:38.280463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.280476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.528 [2024-12-16 06:04:38.280615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.280627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.528 [2024-12-16 06:04:38.280799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.280812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.528 [2024-12-16 06:04:38.280948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.280961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.528 [2024-12-16 06:04:38.281038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.281049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 
00:36:04.528 [2024-12-16 06:04:38.281177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.281189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.528 [2024-12-16 06:04:38.281251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.281262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.528 [2024-12-16 06:04:38.281455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.281468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.528 [2024-12-16 06:04:38.281725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.281739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.528 [2024-12-16 06:04:38.281944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.281957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.528 [2024-12-16 06:04:38.282117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.282130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.528 [2024-12-16 06:04:38.282323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.282335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.528 [2024-12-16 06:04:38.282481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.282496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.528 [2024-12-16 06:04:38.282658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.282670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.528 [2024-12-16 06:04:38.282817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.282829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 
00:36:04.528 [2024-12-16 06:04:38.283036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.283051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.528 [2024-12-16 06:04:38.283254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.283266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.528 [2024-12-16 06:04:38.283462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.283474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.528 [2024-12-16 06:04:38.283632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.528 [2024-12-16 06:04:38.283646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.528 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.283786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.283798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.283934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.283948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.284082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.284095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.284164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.284176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.284322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.284334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.284464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.284477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 
00:36:04.529 [2024-12-16 06:04:38.284551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.284562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.284785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.284797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.284945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.284958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.285198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.285211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.285340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.285353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.285517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.285529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.285694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.285706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.285859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.285872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.286018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.286031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.286124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.286134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 
00:36:04.529 [2024-12-16 06:04:38.286297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.286310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.286507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.286520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.286692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.286704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.286855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.286867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.287066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.287079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.287230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.287243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.287374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.287386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.287580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.287593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.287742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.287755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.287893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.287906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 
00:36:04.529 [2024-12-16 06:04:38.288056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.288068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.288238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.288251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.288515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.288527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.288605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.288617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.288763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.288776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.288971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.288984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.289128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.289142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.289292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.289306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.289464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.289478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.289669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.289682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 
00:36:04.529 [2024-12-16 06:04:38.289830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.289842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.529 qpair failed and we were unable to recover it. 00:36:04.529 [2024-12-16 06:04:38.290065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.529 [2024-12-16 06:04:38.290078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.290244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.290257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.290457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.290470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.290690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.290703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.290948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.290960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.291052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.291063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.291264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.291277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.291420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.291432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.291592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.291605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 
00:36:04.530 [2024-12-16 06:04:38.291825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.291837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.291995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.292008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.292140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.292152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.292240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.292252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.292401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.292413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.292626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.292639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.292809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.292822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.293021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.293034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.293232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.293244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.293442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.293455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 
00:36:04.530 [2024-12-16 06:04:38.293540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.293550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.293685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.293698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.293897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.293912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.294106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.294118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.294278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.294290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.294437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.294450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.294546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.294558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.530 [2024-12-16 06:04:38.294705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.530 [2024-12-16 06:04:38.294717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.530 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.294872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.294884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.295034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.295047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 
00:36:04.531 [2024-12-16 06:04:38.295132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.295143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.295212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.295223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.295354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.295365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.295530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.295543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.295799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.295812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.296015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.296028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.296183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.296196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.296305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.296321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.296460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.296472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.296612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.296625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 
00:36:04.531 [2024-12-16 06:04:38.296774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.296785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.297006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.297020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.297178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.297191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.297414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.297427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.297577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.297589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.297805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.297817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.298040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.298053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.298310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.298322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.298462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.298474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.298554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.298564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 
00:36:04.531 [2024-12-16 06:04:38.298634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.298646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.298717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.298728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.298924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.298937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.299134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.299147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.299240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.299253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.299421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.299433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.299595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.299608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.299748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.299761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.299927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.299940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.300021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.300034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 
00:36:04.531 [2024-12-16 06:04:38.300174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.300186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.300256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.300267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.300478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.300491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.300709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.300722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.531 [2024-12-16 06:04:38.300876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.531 [2024-12-16 06:04:38.300889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.531 qpair failed and we were unable to recover it. 00:36:04.532 [2024-12-16 06:04:38.301099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.532 [2024-12-16 06:04:38.301113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.532 qpair failed and we were unable to recover it. 00:36:04.532 [2024-12-16 06:04:38.301195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.532 [2024-12-16 06:04:38.301206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.532 qpair failed and we were unable to recover it. 00:36:04.532 [2024-12-16 06:04:38.301349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.532 [2024-12-16 06:04:38.301361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.532 qpair failed and we were unable to recover it. 00:36:04.532 [2024-12-16 06:04:38.301562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.532 [2024-12-16 06:04:38.301574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.532 qpair failed and we were unable to recover it. 00:36:04.532 [2024-12-16 06:04:38.301806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.532 [2024-12-16 06:04:38.301820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.532 qpair failed and we were unable to recover it. 
00:36:04.532 [2024-12-16 06:04:38.301967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.532 [2024-12-16 06:04:38.301979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420
00:36:04.532 qpair failed and we were unable to recover it.
00:36:04.532 [...] (the same three-line sequence — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats for every retry from 06:04:38.302 through 06:04:38.337)
00:36:04.827 [2024-12-16 06:04:38.337575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.827 [2024-12-16 06:04:38.337588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420
00:36:04.827 qpair failed and we were unable to recover it.
00:36:04.827 [2024-12-16 06:04:38.337741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.827 [2024-12-16 06:04:38.337753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.827 qpair failed and we were unable to recover it. 00:36:04.827 [2024-12-16 06:04:38.337899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.827 [2024-12-16 06:04:38.337911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.827 qpair failed and we were unable to recover it. 00:36:04.827 [2024-12-16 06:04:38.338056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.827 [2024-12-16 06:04:38.338070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.827 qpair failed and we were unable to recover it. 00:36:04.827 [2024-12-16 06:04:38.338163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.827 [2024-12-16 06:04:38.338176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.827 qpair failed and we were unable to recover it. 00:36:04.827 [2024-12-16 06:04:38.338400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.827 [2024-12-16 06:04:38.338414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.827 qpair failed and we were unable to recover it. 00:36:04.827 [2024-12-16 06:04:38.338655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.827 [2024-12-16 06:04:38.338669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.827 qpair failed and we were unable to recover it. 00:36:04.827 [2024-12-16 06:04:38.338811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.827 [2024-12-16 06:04:38.338824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.827 qpair failed and we were unable to recover it. 00:36:04.827 [2024-12-16 06:04:38.338956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.827 [2024-12-16 06:04:38.338969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.827 qpair failed and we were unable to recover it. 00:36:04.827 [2024-12-16 06:04:38.339133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.827 [2024-12-16 06:04:38.339146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.827 qpair failed and we were unable to recover it. 00:36:04.827 [2024-12-16 06:04:38.339318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.827 [2024-12-16 06:04:38.339330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.827 qpair failed and we were unable to recover it. 
00:36:04.827 [2024-12-16 06:04:38.339561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.827 [2024-12-16 06:04:38.339574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.827 qpair failed and we were unable to recover it. 00:36:04.827 [2024-12-16 06:04:38.339664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.827 [2024-12-16 06:04:38.339677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.827 qpair failed and we were unable to recover it. 00:36:04.827 [2024-12-16 06:04:38.339819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.827 [2024-12-16 06:04:38.339832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.827 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.339989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.340003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.340135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.340148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.340296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.340308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.340532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.340544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.340672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.340685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.340762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.340773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.340917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.340929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 
00:36:04.828 [2024-12-16 06:04:38.341024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.341036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.341171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.341185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.341411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.341425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.341648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.341660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.341725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.341739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.341958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.341971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.342208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.342220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.342440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.342453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.342527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.342538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.342690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.342703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 
00:36:04.828 [2024-12-16 06:04:38.342863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.342876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.342980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.342991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.343086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.343099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.343296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.343309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.343384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.343395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.343590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.343602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.343735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.343748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.343936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.343949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.344092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.344104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.344243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.344256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 
00:36:04.828 [2024-12-16 06:04:38.344388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.344400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.344557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.344569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.344656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.344666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.344815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.344828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.345003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.345016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.345214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.345227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.345365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.345377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.828 [2024-12-16 06:04:38.345597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.828 [2024-12-16 06:04:38.345610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.828 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.345689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.345700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.345831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.345843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 
00:36:04.829 [2024-12-16 06:04:38.345913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.345924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.346134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.346148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.346315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.346326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.346416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.346430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.346655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.346667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.346741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.346752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.346912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.346926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.347123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.347136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.347245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.347257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.347412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.347425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 
00:36:04.829 [2024-12-16 06:04:38.347592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.347604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.347750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.347763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.347864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.347875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.348057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.348070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.348295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.348309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.348439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.348450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.348606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.348619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.348817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.348830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.349041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.349054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.349200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.349213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 
00:36:04.829 [2024-12-16 06:04:38.349376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.349389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.349594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.349606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.349758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.349771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.349857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.349869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.350095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.350109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.350267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.350279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.350432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.350445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.350610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.350623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.350776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.350788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.350989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.351002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 
00:36:04.829 [2024-12-16 06:04:38.351219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.351231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.351385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.351397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.351631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.351643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.351815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.829 [2024-12-16 06:04:38.351828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.829 qpair failed and we were unable to recover it. 00:36:04.829 [2024-12-16 06:04:38.351996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.352008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.352157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.352169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.352259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.352270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.352466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.352478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.352553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.352564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.352644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.352655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 
00:36:04.830 [2024-12-16 06:04:38.352828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.352841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.352996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.353009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.353160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.353173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.353249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.353260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.353319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.353330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.353513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.353525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.353768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.353780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.353952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.353965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.354184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.354197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.354282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.354298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 
00:36:04.830 [2024-12-16 06:04:38.354429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.354441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.354581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.354593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.354788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.354800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.354940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.354953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.355097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.355112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.355304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.355316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.355486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.355498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.355637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.355649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.355865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.355879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.355959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.355970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 
00:36:04.830 [2024-12-16 06:04:38.356068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.356080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.356210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.356223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.356439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.356451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.356695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.356707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.356870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.356883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.357054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.357067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.357141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.357152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.357286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.357297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.357544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.357556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.357700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.357713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 
00:36:04.830 [2024-12-16 06:04:38.357933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.830 [2024-12-16 06:04:38.357945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.830 qpair failed and we were unable to recover it. 00:36:04.830 [2024-12-16 06:04:38.358141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.358154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.358278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.358290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.358430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.358443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.358605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.358617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.358836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.358853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.358985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.358997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.359068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.359079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.359209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.359221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.359353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.359364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 
00:36:04.831 [2024-12-16 06:04:38.359454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.359466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.359669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.359699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.359880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.359919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.360137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.360156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.360416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.360434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.360676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.360694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.360952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.360971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.361125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.361143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.361302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.361319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.361524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.361542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 
00:36:04.831 [2024-12-16 06:04:38.361754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.361769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.361854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.361866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.362013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.362025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.362088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.362100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.362181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.362194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.362329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.362341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.362474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.362486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.362681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.362694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.362916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.362930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.363123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.363135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 
00:36:04.831 [2024-12-16 06:04:38.363287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.363300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.363431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.363443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.363665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.363677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.363924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.363936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.364076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.364088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.364315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.364328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.364547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.364560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.364806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.364818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.831 qpair failed and we were unable to recover it. 00:36:04.831 [2024-12-16 06:04:38.364972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.831 [2024-12-16 06:04:38.364985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.365198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.365210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 
00:36:04.832 [2024-12-16 06:04:38.365431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.365444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.365640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.365652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.365744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.365755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.365855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.365868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.366052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.366064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.366215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.366228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.366373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.366386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.366599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.366611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.366757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.366770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.366937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.366950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 
00:36:04.832 [2024-12-16 06:04:38.367167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.367179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.367383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.367395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.367591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.367604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.367853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.367866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.368035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.368048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.368217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.368230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.368342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.368354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.368486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.368499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.368721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.368733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.368865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.368878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 
00:36:04.832 [2024-12-16 06:04:38.369017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.369029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.369107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.369118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.369266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.369279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.369349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.369360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.369568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.369582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.369674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.369686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.369903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.369916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.370131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.370144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.370323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.370335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.832 [2024-12-16 06:04:38.370499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.370511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 
00:36:04.832 [2024-12-16 06:04:38.370706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.832 [2024-12-16 06:04:38.370719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.832 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.370935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.370948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.371092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.371104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.371271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.371284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.371490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.371503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.371644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.371656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.371886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.371899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.372044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.372056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.372209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.372221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.372444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.372457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 
00:36:04.833 [2024-12-16 06:04:38.372597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.372610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.372738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.372751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.372835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.372852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.373036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.373048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.373242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.373254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.373482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.373494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.373641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.373653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.373908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.373922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.374028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.374040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.374183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.374195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 
00:36:04.833 [2024-12-16 06:04:38.374342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.374354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.374597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.374636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.374807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.374826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.375059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.375078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.375248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.375265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.375493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.375511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.375740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.375757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.375914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.375928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.376147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.376160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.376253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.376264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 
00:36:04.833 [2024-12-16 06:04:38.376413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.376426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.376650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.376663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.376874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.376887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.377035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.377048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.377216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.377231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.377429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.377441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.377594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.833 [2024-12-16 06:04:38.377606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.833 qpair failed and we were unable to recover it. 00:36:04.833 [2024-12-16 06:04:38.377698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.377709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.377901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.377914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.378126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.378139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 
00:36:04.834 [2024-12-16 06:04:38.378283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.378295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.378444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.378456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.378680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.378693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.378932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.378945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.379106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.379118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.379339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.379351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.379556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.379569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.379708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.379721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.379962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.379975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.380069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.380080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 
00:36:04.834 [2024-12-16 06:04:38.380300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.380313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.380457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.380469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.380595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.380608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.380738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.380750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.380893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.380906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.381053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.381065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.381334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.381346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.381539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.381551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.381709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.381722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.381940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.381954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 
00:36:04.834 [2024-12-16 06:04:38.382165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.382178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.382459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.382482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.382687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.382705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.382910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.382929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.383136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.383154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.383389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.383406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.383564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.383583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.383862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.383877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.384104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.384116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.384380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.384393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 
00:36:04.834 [2024-12-16 06:04:38.384538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.384550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.384698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.384710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.384891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.384903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.385044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.385057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.834 qpair failed and we were unable to recover it. 00:36:04.834 [2024-12-16 06:04:38.385199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.834 [2024-12-16 06:04:38.385214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.385413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.385425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.385588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.385601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.385690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.385702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.385904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.385917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.386114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.386126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 
00:36:04.835 [2024-12-16 06:04:38.386273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.386286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.386420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.386433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.386517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.386528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.386693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.386705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.386935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.386948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.387040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.387051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.387182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.387194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.387391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.387404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.387481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.387492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.387561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.387573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 
00:36:04.835 [2024-12-16 06:04:38.387729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.387741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.387935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.387947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.388184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.388196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.388393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.388406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.388581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.388594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.388774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.388786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.388932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.388946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.389083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.389095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.389185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.389196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.389269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.389280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 
00:36:04.835 [2024-12-16 06:04:38.389419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.389430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.389662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.389674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.389898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.389911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.390068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.390081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.390298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.390311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.390531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.390544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.390741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.390753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.390975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.390987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.391065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.391076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 00:36:04.835 [2024-12-16 06:04:38.391215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.835 [2024-12-16 06:04:38.391226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.835 qpair failed and we were unable to recover it. 
00:36:04.835 [2024-12-16 06:04:38.391445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.835 [2024-12-16 06:04:38.391457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420
00:36:04.835 qpair failed and we were unable to recover it.
00:36:04.835 [... the same pair of errors (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock error for tqpair=0x7ffbb0000b90, addr=10.0.0.2, port=4420) repeats for every retry through timestamp 2024-12-16 06:04:38.427406, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:36:04.841 [2024-12-16 06:04:38.427479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.841 [2024-12-16 06:04:38.427490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.841 qpair failed and we were unable to recover it. 00:36:04.841 [2024-12-16 06:04:38.427642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.841 [2024-12-16 06:04:38.427655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.841 qpair failed and we were unable to recover it. 00:36:04.841 [2024-12-16 06:04:38.427749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.841 [2024-12-16 06:04:38.427759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.841 qpair failed and we were unable to recover it. 00:36:04.841 [2024-12-16 06:04:38.427964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.841 [2024-12-16 06:04:38.427978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.841 qpair failed and we were unable to recover it. 00:36:04.841 [2024-12-16 06:04:38.428199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.841 [2024-12-16 06:04:38.428211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.841 qpair failed and we were unable to recover it. 00:36:04.841 [2024-12-16 06:04:38.428391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.841 [2024-12-16 06:04:38.428403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.841 qpair failed and we were unable to recover it. 00:36:04.841 [2024-12-16 06:04:38.428612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.841 [2024-12-16 06:04:38.428626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.841 qpair failed and we were unable to recover it. 00:36:04.841 [2024-12-16 06:04:38.428710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.841 [2024-12-16 06:04:38.428720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.841 qpair failed and we were unable to recover it. 00:36:04.841 [2024-12-16 06:04:38.428918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.841 [2024-12-16 06:04:38.428931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.841 qpair failed and we were unable to recover it. 00:36:04.841 [2024-12-16 06:04:38.429091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.841 [2024-12-16 06:04:38.429103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.841 qpair failed and we were unable to recover it. 
00:36:04.841 [2024-12-16 06:04:38.429246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.841 [2024-12-16 06:04:38.429258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.841 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.429384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.429397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.429465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.429476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.429561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.429572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.429660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.429672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.429864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.429877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.429971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.429983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.430189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.430202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.430285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.430296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.430440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.430453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 
00:36:04.842 [2024-12-16 06:04:38.430594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.430607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.430813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.430833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.430935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.430946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.431142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.431154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.431346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.431358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.431488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.431501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.431629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.431642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.431783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.431795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.431937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.431950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.432090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.432103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 
00:36:04.842 [2024-12-16 06:04:38.432343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.432356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.432484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.432497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.432710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.432723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.432857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.432871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.432959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.432970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.433122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.433134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.433279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.433292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.433423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.433436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.433585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.433598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.433667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.433678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 
00:36:04.842 [2024-12-16 06:04:38.433782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.433794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.433933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.433945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.434124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.434136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.434211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.434223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.434294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.434306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.434525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.842 [2024-12-16 06:04:38.434538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.842 qpair failed and we were unable to recover it. 00:36:04.842 [2024-12-16 06:04:38.434671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.434684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.434905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.434919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.435056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.435069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.435196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.435209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 
00:36:04.843 [2024-12-16 06:04:38.435338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.435351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.435512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.435524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.435733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.435746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.435978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.435991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.436090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.436103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.436354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.436366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.436587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.436601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.436671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.436683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.436865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.436878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.437016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.437029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 
00:36:04.843 [2024-12-16 06:04:38.437184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.437196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.437281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.437294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.437431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.437443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.437581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.437593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.437729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.437741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.437889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.437902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.438043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.438056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.438204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.438216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.438410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.438423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.438565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.438577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 
00:36:04.843 [2024-12-16 06:04:38.438649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.438660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.438813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.438826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.438992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.439006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.439136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.439148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.439219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.439230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.439375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.439387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.439537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.439550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.439739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.439752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.439990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.440003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.440134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.440148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 
00:36:04.843 [2024-12-16 06:04:38.440361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.440374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.440593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.440605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.440795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.440808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.440947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.440960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.843 [2024-12-16 06:04:38.441106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.843 [2024-12-16 06:04:38.441120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.843 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.441196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.441208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.441379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.441393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.441566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.441578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.441818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.441830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.441965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.441978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 
00:36:04.844 [2024-12-16 06:04:38.442138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.442152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.442373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.442385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.442529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.442542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.442680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.442693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.442789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.442802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.442997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.443009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.443154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.443166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.443294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.443306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.443467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.443481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.443634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.443647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 
00:36:04.844 [2024-12-16 06:04:38.443799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.443812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.443984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.443999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.444205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.444218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.444367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.444383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.444554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.444567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.444709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.444721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.444907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.444921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.445017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.445030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.445189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.445202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.445375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.445388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 
00:36:04.844 [2024-12-16 06:04:38.445538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.445550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.445694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.445707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.445937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.445950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.446165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.446178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.446360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.446372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.446515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.446528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.446700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.446713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.446861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.446877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.447104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.447114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.447248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.447259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 
00:36:04.844 [2024-12-16 06:04:38.447487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.447498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.447695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.844 [2024-12-16 06:04:38.447706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.844 qpair failed and we were unable to recover it. 00:36:04.844 [2024-12-16 06:04:38.447857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.447868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.448112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.448123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.448269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.448279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.448428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.448438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.448575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.448586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.448667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.448677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.448825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.448837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.448992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.449002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 
00:36:04.845 [2024-12-16 06:04:38.449142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.449152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.449314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.449325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.449532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.449544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.449787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.449797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.449877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.449888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.450083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.450093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.450309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.450321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.450531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.450542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.450677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.450688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.450908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.450918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 
00:36:04.845 [2024-12-16 06:04:38.450999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.451010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.451253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.451266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.451483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.451493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.451658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.451669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.451825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.451836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.452107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.452130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.452389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.452407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.452634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.452652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.452752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.452768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.452975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.452992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 
00:36:04.845 [2024-12-16 06:04:38.453203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.453219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.453373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.453386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.453541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.453553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.453682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.453692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.453854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.453865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.453955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.453975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.454055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.454066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.454282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.454293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.845 [2024-12-16 06:04:38.454537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.845 [2024-12-16 06:04:38.454548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.845 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.454769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.454782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 
00:36:04.846 [2024-12-16 06:04:38.454941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.454952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.455097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.455107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.455261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.455272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.455503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.455513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.455767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.455778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.456019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.456031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.456158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.456168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.456337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.456348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.456513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.456524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.456702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.456713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 
00:36:04.846 [2024-12-16 06:04:38.456930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.456941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.457110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.457122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.457349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.457361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.457505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.457516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.457681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.457693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.457857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.457867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.458037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.458047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.458242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.458252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.458471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.458482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.458638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.458649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 
00:36:04.846 [2024-12-16 06:04:38.458875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.458887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.459082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.459094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.459233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.459243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.459441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.459452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.459670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.459682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.459811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.459821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.459966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.459979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.460126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.460137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.460228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.460239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.460437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.460448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 
00:36:04.846 [2024-12-16 06:04:38.460545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.460556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.460632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.846 [2024-12-16 06:04:38.460643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.846 qpair failed and we were unable to recover it. 00:36:04.846 [2024-12-16 06:04:38.460721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.460732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.460871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.460882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.461050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.461061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.461222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.461232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.461363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.461374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.461615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.461626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.461707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.461718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.461795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.461806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 
00:36:04.847 [2024-12-16 06:04:38.462029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.462040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.462244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.462255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.462345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.462355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.462494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.462505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.462651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.462662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.462858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.462870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.463030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.463041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.463142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.463153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.463362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.463372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.463517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.463528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 
00:36:04.847 [2024-12-16 06:04:38.463721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.463731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.463929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.463940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.464126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.464136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.464299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.464310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.464453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.464463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.464662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.464673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.464892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.464903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.465030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.465041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.465121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.465131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.465218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.465229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 
00:36:04.847 [2024-12-16 06:04:38.465290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.465300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.465469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.465482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.465636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.465646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.465825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.465836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.465912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.465924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.466146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.466157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.466394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.466405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.466545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.466556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.466715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.847 [2024-12-16 06:04:38.466725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.847 qpair failed and we were unable to recover it. 00:36:04.847 [2024-12-16 06:04:38.466899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.466910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 
00:36:04.848 [2024-12-16 06:04:38.466992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.467003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.467270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.467281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.467424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.467434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.467641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.467652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.467897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.467909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.468070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.468082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.468181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.468191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.468406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.468417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.468610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.468621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.468728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.468738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 
00:36:04.848 [2024-12-16 06:04:38.468887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.468899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.468978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.468988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.469116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.469127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.469323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.469335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.469463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.469475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.469717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.469728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.469887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.469899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.469993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.470005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.470264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.470291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.470441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.470458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 
00:36:04.848 [2024-12-16 06:04:38.470650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.470667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.470912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.470930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.471084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.471102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.471334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.471350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.471537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.471554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.471740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.471756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.471994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.472011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.472169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.472186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.472287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.472303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.472400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.472416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 
00:36:04.848 [2024-12-16 06:04:38.472574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.472591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.472769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.472791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.472992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.473008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.473168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.473184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.473405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.473423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.473638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.473654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.473843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.473865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.848 [2024-12-16 06:04:38.474110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.848 [2024-12-16 06:04:38.474127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.848 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.474305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.474320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.474549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.474565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 
00:36:04.849 [2024-12-16 06:04:38.474776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.474794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.475041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.475059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.475246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.475262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.475423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.475439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.475604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.475621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.475711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.475729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.475885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.475901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.475985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.476001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.476240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.476257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.476419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.476435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 
00:36:04.849 [2024-12-16 06:04:38.476531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.476547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.476771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.476785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.476934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.476948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.477116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.477126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.477332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.477342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.477506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.477518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.477647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.477657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.477797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.477807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.478014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.478025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.478224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.478235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 
00:36:04.849 [2024-12-16 06:04:38.478432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.478443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.478532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.478543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.478635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.478647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.478719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.478730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.478905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.478917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.479120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.479131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.479231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.479241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.479328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.479338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.479486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.479498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 00:36:04.849 [2024-12-16 06:04:38.479742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.849 [2024-12-16 06:04:38.479754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.849 qpair failed and we were unable to recover it. 
00:36:04.849 [2024-12-16 06:04:38.479967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.849 [2024-12-16 06:04:38.479978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420
00:36:04.849 qpair failed and we were unable to recover it.
00:36:04.855 [2024-12-16 06:04:38.480184 .. 06:04:38.514287] (repeated) The same three-line pattern recurs continuously for the rest of this interval: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error against addr=10.0.0.2, port=4420 (first for tqpair=0x7ffbb0000b90, then for tqpair=0x7ffbac000b90), and every attempt ends with "qpair failed and we were unable to recover it."
00:36:04.855 [2024-12-16 06:04:38.514543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.855 [2024-12-16 06:04:38.514558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.855 qpair failed and we were unable to recover it. 00:36:04.855 [2024-12-16 06:04:38.514656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.855 [2024-12-16 06:04:38.514672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.855 qpair failed and we were unable to recover it. 00:36:04.855 [2024-12-16 06:04:38.514766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.855 [2024-12-16 06:04:38.514782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.855 qpair failed and we were unable to recover it. 00:36:04.855 [2024-12-16 06:04:38.514939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.855 [2024-12-16 06:04:38.514956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.855 qpair failed and we were unable to recover it. 00:36:04.855 [2024-12-16 06:04:38.515116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.855 [2024-12-16 06:04:38.515133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.855 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.515290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.515306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.515413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.515428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.515638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.515654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.515857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.515874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.515981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.515997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 
00:36:04.856 [2024-12-16 06:04:38.516225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.516241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.516341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.516356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.516472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.516487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.516701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.516718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.516890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.516907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.516980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.516996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.517144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.517159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.517258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.517273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.517374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.517393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.517491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.517506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 
00:36:04.856 [2024-12-16 06:04:38.517644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.517659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.517766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.517781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.518010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.518025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.518132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.518148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.518382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.518398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.518483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.518499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.518733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.518749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.518839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.518859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.519012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.519029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.519127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.519143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 
00:36:04.856 [2024-12-16 06:04:38.519232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.519248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.519351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.519367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.519523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.519540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.519621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.519637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.519868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.519885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.519973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.519989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.520134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.520150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.520249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.520265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.520341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.520356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.520446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.520462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 
00:36:04.856 [2024-12-16 06:04:38.520600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.520615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.856 qpair failed and we were unable to recover it. 00:36:04.856 [2024-12-16 06:04:38.520774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.856 [2024-12-16 06:04:38.520790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.520935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.520953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.521104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.521120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.521268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.521283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.521377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.521393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.521481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.521497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.521602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.521618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.521703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.521721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.521802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.521818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 
00:36:04.857 [2024-12-16 06:04:38.521975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.521992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.522084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.522099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.522188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.522205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.522352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.522367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.522532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.522548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.522691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.522707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.522797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.522812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.522957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.522974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.523122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.523141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.523290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.523306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 
00:36:04.857 [2024-12-16 06:04:38.523451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.523467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.523602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.523618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.523714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.523730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.523840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.523861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.524078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.524095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.524190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.524206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.524299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.524314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.524460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.524476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.524620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.524636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.524798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.524813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 
00:36:04.857 [2024-12-16 06:04:38.524923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.524939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.525084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.525101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.525191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.525206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.525419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.525436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.525538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.525553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.525695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.525711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.525802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.857 [2024-12-16 06:04:38.525817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.857 qpair failed and we were unable to recover it. 00:36:04.857 [2024-12-16 06:04:38.526028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.526044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.526190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.526205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.526309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.526324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 
00:36:04.858 [2024-12-16 06:04:38.526430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.526449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.526605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.526623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.526716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.526731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.526818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.526834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.526919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.526934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.527086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.527106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.527256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.527269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.527427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.527439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.527516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.527528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.527667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.527680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 
00:36:04.858 [2024-12-16 06:04:38.527818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.527830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.528045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.528063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.528171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.528186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.528413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.528429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.528516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.528531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.528633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.528648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.528739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.528755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.528906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.528924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.529082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.529104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.529175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.529190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 
00:36:04.858 [2024-12-16 06:04:38.529254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.529269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.529409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.529425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.529529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.529544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.529765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.529782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.529921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.529937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.530027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.530045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.530136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.530153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.530244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.530259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.530337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.530352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.530528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.530544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 
00:36:04.858 [2024-12-16 06:04:38.530634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.530649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.530870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.858 [2024-12-16 06:04:38.530888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.858 qpair failed and we were unable to recover it. 00:36:04.858 [2024-12-16 06:04:38.530966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.859 [2024-12-16 06:04:38.530982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.859 qpair failed and we were unable to recover it. 00:36:04.859 [2024-12-16 06:04:38.531090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.859 [2024-12-16 06:04:38.531107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.859 qpair failed and we were unable to recover it. 00:36:04.859 [2024-12-16 06:04:38.531189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.859 [2024-12-16 06:04:38.531204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.859 qpair failed and we were unable to recover it. 00:36:04.859 [2024-12-16 06:04:38.531310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.859 [2024-12-16 06:04:38.531326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.859 qpair failed and we were unable to recover it. 00:36:04.859 [2024-12-16 06:04:38.531477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.859 [2024-12-16 06:04:38.531494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.859 qpair failed and we were unable to recover it. 00:36:04.859 [2024-12-16 06:04:38.531726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.859 [2024-12-16 06:04:38.531742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.859 qpair failed and we were unable to recover it. 00:36:04.859 [2024-12-16 06:04:38.531892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.859 [2024-12-16 06:04:38.531908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.859 qpair failed and we were unable to recover it. 00:36:04.859 [2024-12-16 06:04:38.532065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.859 [2024-12-16 06:04:38.532080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.859 qpair failed and we were unable to recover it. 
00:36:04.859 [2024-12-16 06:04:38.532172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.859 [2024-12-16 06:04:38.532188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.859 qpair failed and we were unable to recover it. 00:36:04.859 [2024-12-16 06:04:38.532287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.859 [2024-12-16 06:04:38.532303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.859 qpair failed and we were unable to recover it. 00:36:04.859 [2024-12-16 06:04:38.532384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.859 [2024-12-16 06:04:38.532401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.859 qpair failed and we were unable to recover it. 00:36:04.859 [2024-12-16 06:04:38.532493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.859 [2024-12-16 06:04:38.532508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.859 qpair failed and we were unable to recover it. 00:36:04.859 [2024-12-16 06:04:38.532669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.859 [2024-12-16 06:04:38.532686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:04.859 qpair failed and we were unable to recover it. 00:36:04.859 [2024-12-16 06:04:38.532852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.859 [2024-12-16 06:04:38.532877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.859 qpair failed and we were unable to recover it. 00:36:04.859 [2024-12-16 06:04:38.533026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.859 [2024-12-16 06:04:38.533043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.859 qpair failed and we were unable to recover it. 00:36:04.859 [2024-12-16 06:04:38.533231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.859 [2024-12-16 06:04:38.533247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.859 qpair failed and we were unable to recover it. 00:36:04.859 [2024-12-16 06:04:38.533392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.859 [2024-12-16 06:04:38.533407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.859 qpair failed and we were unable to recover it. 00:36:04.859 [2024-12-16 06:04:38.533479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.859 [2024-12-16 06:04:38.533495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.859 qpair failed and we were unable to recover it. 
00:36:04.859 [2024-12-16 06:04:38.533704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.859 [2024-12-16 06:04:38.533720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420
00:36:04.859 qpair failed and we were unable to recover it.
00:36:04.860 [2024-12-16 06:04:38.539552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.860 [2024-12-16 06:04:38.539584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:04.860 qpair failed and we were unable to recover it.
00:36:04.860 [2024-12-16 06:04:38.539755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.860 [2024-12-16 06:04:38.539774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420
00:36:04.860 qpair failed and we were unable to recover it.
00:36:04.860 [2024-12-16 06:04:38.539867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.860 [2024-12-16 06:04:38.539885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420
00:36:04.860 qpair failed and we were unable to recover it.
00:36:04.865 [2024-12-16 06:04:38.572437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.572453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.572611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.572626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.572777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.572793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.573001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.573017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.573247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.573262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.573428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.573444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.573532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.573547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.573691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.573707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.573961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.573977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.574184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.574200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 
00:36:04.865 [2024-12-16 06:04:38.574434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.574449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.574604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.574620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.574843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.574866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.574967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.574989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.575154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.575170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.575400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.575416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.575590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.575606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.575810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.575825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.576090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.576106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.576204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.576220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 
00:36:04.865 [2024-12-16 06:04:38.576441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.576456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.576633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.576649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.576866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.576882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.577035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.577050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.577261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.577276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.577435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.577451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.577623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.577639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.577781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.577797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.578059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.578076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 00:36:04.865 [2024-12-16 06:04:38.578251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.865 [2024-12-16 06:04:38.578266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.865 qpair failed and we were unable to recover it. 
00:36:04.866 [2024-12-16 06:04:38.578509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.578525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.578692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.578707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.578968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.578984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.579144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.579159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.579312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.579327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.579486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.579501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.579683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.579698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.579879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.579895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.580114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.580130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.580291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.580306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 
00:36:04.866 [2024-12-16 06:04:38.580565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.580581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.580788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.580803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.581057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.581073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.581226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.581242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.581449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.581465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.581705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.581720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.581954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.581970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.582114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.582129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.582295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.582310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.582529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.582544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 
00:36:04.866 [2024-12-16 06:04:38.582700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.582715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.582872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.582888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.583094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.583110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.583208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.583227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.583386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.583401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.583657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.583672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.583901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.583918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.584150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.584166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.584314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.584330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.584574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.584589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 
00:36:04.866 [2024-12-16 06:04:38.584809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.584824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.585035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.585051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.585278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.585294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.585465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.585480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.585710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.585726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.585881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.585898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.586124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.586139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.586386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.586401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.586576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.586591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 00:36:04.866 [2024-12-16 06:04:38.586796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.866 [2024-12-16 06:04:38.586812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.866 qpair failed and we were unable to recover it. 
00:36:04.866 [2024-12-16 06:04:38.587001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.587017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.587241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.587256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.587367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.587382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.587532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.587548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.587780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.587796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.587956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.587972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.588200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.588215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.588419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.588434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.588582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.588597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.588775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.588791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 
00:36:04.867 [2024-12-16 06:04:38.589029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.589045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.589288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.589304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.589531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.589547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.589775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.589791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.589945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.589961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.590110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.590126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.590354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.590370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.590577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.590592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.590845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.590864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.591117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.591133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 
00:36:04.867 [2024-12-16 06:04:38.591355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.591371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.591529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.591544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.591750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.591766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.591926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.591945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.592103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.592119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.592258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.592274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.592502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.592517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.592751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.592767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.593003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.593019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.593254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.593269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 
00:36:04.867 [2024-12-16 06:04:38.593428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.593443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.593621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.593637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.593790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.593806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.593950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.593967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.594190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.594205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.594373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.594388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.594608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.594623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.594872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.594889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.595056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.595071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.595363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.595378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 
00:36:04.867 [2024-12-16 06:04:38.595549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.595565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.867 [2024-12-16 06:04:38.595718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.867 [2024-12-16 06:04:38.595733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.867 qpair failed and we were unable to recover it. 00:36:04.868 [2024-12-16 06:04:38.595937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.868 [2024-12-16 06:04:38.595954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.868 qpair failed and we were unable to recover it. 00:36:04.868 [2024-12-16 06:04:38.596183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.868 [2024-12-16 06:04:38.596199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.868 qpair failed and we were unable to recover it. 00:36:04.868 [2024-12-16 06:04:38.596397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.868 [2024-12-16 06:04:38.596412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.868 qpair failed and we were unable to recover it. 00:36:04.868 [2024-12-16 06:04:38.596516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.868 [2024-12-16 06:04:38.596531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.868 qpair failed and we were unable to recover it. 00:36:04.868 [2024-12-16 06:04:38.596621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.868 [2024-12-16 06:04:38.596637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.868 qpair failed and we were unable to recover it. 00:36:04.868 [2024-12-16 06:04:38.596862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.868 [2024-12-16 06:04:38.596878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.868 qpair failed and we were unable to recover it. 00:36:04.868 [2024-12-16 06:04:38.597134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.868 [2024-12-16 06:04:38.597149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.868 qpair failed and we were unable to recover it. 00:36:04.868 [2024-12-16 06:04:38.597299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.868 [2024-12-16 06:04:38.597314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.868 qpair failed and we were unable to recover it. 
00:36:04.868 [2024-12-16 06:04:38.597455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.868 [2024-12-16 06:04:38.597471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.868 qpair failed and we were unable to recover it. 00:36:04.868 [2024-12-16 06:04:38.597702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.868 [2024-12-16 06:04:38.597717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.868 qpair failed and we were unable to recover it. 00:36:04.868 [2024-12-16 06:04:38.597891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.868 [2024-12-16 06:04:38.597907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.868 qpair failed and we were unable to recover it. 00:36:04.868 [2024-12-16 06:04:38.598082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.868 [2024-12-16 06:04:38.598097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.868 qpair failed and we were unable to recover it. 00:36:04.868 [2024-12-16 06:04:38.598327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.868 [2024-12-16 06:04:38.598342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.868 qpair failed and we were unable to recover it. 00:36:04.868 [2024-12-16 06:04:38.598495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.868 [2024-12-16 06:04:38.598511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.868 qpair failed and we were unable to recover it. 00:36:04.868 [2024-12-16 06:04:38.598660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.868 [2024-12-16 06:04:38.598675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.868 qpair failed and we were unable to recover it. 00:36:04.868 [2024-12-16 06:04:38.598900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.868 [2024-12-16 06:04:38.598916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.868 qpair failed and we were unable to recover it. 00:36:04.868 [2024-12-16 06:04:38.598988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.868 [2024-12-16 06:04:38.599004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.868 qpair failed and we were unable to recover it. 00:36:04.868 [2024-12-16 06:04:38.599231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.868 [2024-12-16 06:04:38.599246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:04.868 qpair failed and we were unable to recover it. 
00:36:04.868 [2024-12-16 06:04:38.599454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.868 [2024-12-16 06:04:38.599470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420
00:36:04.868 qpair failed and we were unable to recover it.
00:36:04.872 [2024-12-16 06:04:38.630517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.872 [2024-12-16 06:04:38.630532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420
00:36:04.872 qpair failed and we were unable to recover it.
00:36:04.872 [2024-12-16 06:04:38.630784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.872 [2024-12-16 06:04:38.630805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420
00:36:04.872 qpair failed and we were unable to recover it.
00:36:04.873 [2024-12-16 06:04:38.638074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.873 [2024-12-16 06:04:38.638086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420
00:36:04.873 qpair failed and we were unable to recover it.
00:36:04.873 [2024-12-16 06:04:38.638270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.873 [2024-12-16 06:04:38.638295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.873 qpair failed and we were unable to recover it. 00:36:04.873 [2024-12-16 06:04:38.638533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.873 [2024-12-16 06:04:38.638550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.873 qpair failed and we were unable to recover it. 00:36:04.873 [2024-12-16 06:04:38.638706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.873 [2024-12-16 06:04:38.638722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.873 qpair failed and we were unable to recover it. 00:36:04.873 [2024-12-16 06:04:38.638898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.873 [2024-12-16 06:04:38.638915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.873 qpair failed and we were unable to recover it. 00:36:04.873 [2024-12-16 06:04:38.639012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.873 [2024-12-16 06:04:38.639028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.873 qpair failed and we were unable to recover it. 00:36:04.873 [2024-12-16 06:04:38.639178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.873 [2024-12-16 06:04:38.639193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.873 qpair failed and we were unable to recover it. 00:36:04.873 [2024-12-16 06:04:38.639344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.873 [2024-12-16 06:04:38.639359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.873 qpair failed and we were unable to recover it. 00:36:04.873 [2024-12-16 06:04:38.639612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.873 [2024-12-16 06:04:38.639629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.873 qpair failed and we were unable to recover it. 00:36:04.873 [2024-12-16 06:04:38.639882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.873 [2024-12-16 06:04:38.639899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.873 qpair failed and we were unable to recover it. 00:36:04.873 [2024-12-16 06:04:38.639997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.873 [2024-12-16 06:04:38.640012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.873 qpair failed and we were unable to recover it. 
00:36:04.873 [2024-12-16 06:04:38.640268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.873 [2024-12-16 06:04:38.640284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.873 qpair failed and we were unable to recover it. 00:36:04.873 [2024-12-16 06:04:38.640443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.873 [2024-12-16 06:04:38.640459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.873 qpair failed and we were unable to recover it. 00:36:04.873 [2024-12-16 06:04:38.640686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.873 [2024-12-16 06:04:38.640702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.873 qpair failed and we were unable to recover it. 00:36:04.873 [2024-12-16 06:04:38.640854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.873 [2024-12-16 06:04:38.640870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.873 qpair failed and we were unable to recover it. 00:36:04.873 [2024-12-16 06:04:38.641109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.873 [2024-12-16 06:04:38.641125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.873 qpair failed and we were unable to recover it. 00:36:04.873 [2024-12-16 06:04:38.641295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.873 [2024-12-16 06:04:38.641311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.873 qpair failed and we were unable to recover it. 00:36:04.873 [2024-12-16 06:04:38.641538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.873 [2024-12-16 06:04:38.641554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.873 qpair failed and we were unable to recover it. 00:36:04.873 [2024-12-16 06:04:38.641653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.873 [2024-12-16 06:04:38.641669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.873 qpair failed and we were unable to recover it. 00:36:04.873 [2024-12-16 06:04:38.641752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.873 [2024-12-16 06:04:38.641768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.873 qpair failed and we were unable to recover it. 00:36:04.873 [2024-12-16 06:04:38.641972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.873 [2024-12-16 06:04:38.641989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.873 qpair failed and we were unable to recover it. 
00:36:04.873 [2024-12-16 06:04:38.642221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.873 [2024-12-16 06:04:38.642237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.873 qpair failed and we were unable to recover it. 00:36:04.873 [2024-12-16 06:04:38.642409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.642425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.642578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.642594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.642795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.642810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.643034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.643051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.643281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.643297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.643507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.643523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.643787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.643806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.643954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.643971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.644200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.644215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 
00:36:04.874 [2024-12-16 06:04:38.644448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.644465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.644694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.644709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.644859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.644875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.645016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.645033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.645136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.645152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.645307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.645322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.645482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.645498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.645720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.645735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.645939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.645956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.646131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.646147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 
00:36:04.874 [2024-12-16 06:04:38.646353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.646369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.646550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.646566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.646668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.646683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.646904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.646921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.647074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.647090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.647321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.647337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.647505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.647521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.647607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.647622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.647704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.647720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 00:36:04.874 [2024-12-16 06:04:38.647949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.874 [2024-12-16 06:04:38.647967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:04.874 qpair failed and we were unable to recover it. 
00:36:05.162 [2024-12-16 06:04:38.648129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.162 [2024-12-16 06:04:38.648146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.162 qpair failed and we were unable to recover it. 00:36:05.162 [2024-12-16 06:04:38.648295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.162 [2024-12-16 06:04:38.648311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.162 qpair failed and we were unable to recover it. 00:36:05.162 [2024-12-16 06:04:38.648401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.162 [2024-12-16 06:04:38.648417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.162 qpair failed and we were unable to recover it. 00:36:05.162 [2024-12-16 06:04:38.648577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.162 [2024-12-16 06:04:38.648594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.162 qpair failed and we were unable to recover it. 00:36:05.162 [2024-12-16 06:04:38.648772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.162 [2024-12-16 06:04:38.648790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.162 qpair failed and we were unable to recover it. 00:36:05.162 [2024-12-16 06:04:38.648900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.162 [2024-12-16 06:04:38.648918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.162 qpair failed and we were unable to recover it. 00:36:05.162 [2024-12-16 06:04:38.649076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.162 [2024-12-16 06:04:38.649092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.162 qpair failed and we were unable to recover it. 00:36:05.162 [2024-12-16 06:04:38.649273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.162 [2024-12-16 06:04:38.649290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.162 qpair failed and we were unable to recover it. 00:36:05.162 [2024-12-16 06:04:38.649544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.162 [2024-12-16 06:04:38.649561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.162 qpair failed and we were unable to recover it. 00:36:05.162 [2024-12-16 06:04:38.649838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.162 [2024-12-16 06:04:38.649859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.162 qpair failed and we were unable to recover it. 
00:36:05.162 [2024-12-16 06:04:38.650065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.162 [2024-12-16 06:04:38.650081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.162 qpair failed and we were unable to recover it. 00:36:05.162 [2024-12-16 06:04:38.650321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.162 [2024-12-16 06:04:38.650338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.162 qpair failed and we were unable to recover it. 00:36:05.162 [2024-12-16 06:04:38.650512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.162 [2024-12-16 06:04:38.650528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.162 qpair failed and we were unable to recover it. 00:36:05.162 [2024-12-16 06:04:38.650698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.162 [2024-12-16 06:04:38.650713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.162 qpair failed and we were unable to recover it. 00:36:05.162 [2024-12-16 06:04:38.650946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.162 [2024-12-16 06:04:38.650963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.162 qpair failed and we were unable to recover it. 00:36:05.162 [2024-12-16 06:04:38.651213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.162 [2024-12-16 06:04:38.651229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.162 qpair failed and we were unable to recover it. 00:36:05.162 [2024-12-16 06:04:38.651413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.162 [2024-12-16 06:04:38.651429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.162 qpair failed and we were unable to recover it. 00:36:05.162 [2024-12-16 06:04:38.651651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.162 [2024-12-16 06:04:38.651668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.162 qpair failed and we were unable to recover it. 00:36:05.162 [2024-12-16 06:04:38.651952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.162 [2024-12-16 06:04:38.651974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.162 qpair failed and we were unable to recover it. 00:36:05.162 [2024-12-16 06:04:38.652123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.162 [2024-12-16 06:04:38.652140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.162 qpair failed and we were unable to recover it. 
00:36:05.162 [2024-12-16 06:04:38.652354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.652370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.652544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.652560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.652668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.652683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.652837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.652858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.653130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.653147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.653362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.653377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.653610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.653626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.653842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.653862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.654099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.654115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.654216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.654232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 
00:36:05.163 [2024-12-16 06:04:38.654428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.654445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.654605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.654625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.654724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.654740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.654954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.654970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.655123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.655139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.655298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.655314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.655533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.655549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.655721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.655737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.655901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.655918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.656020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.656036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 
00:36:05.163 [2024-12-16 06:04:38.656191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.656208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.656374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.656390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.656560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.656576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.656809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.656824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.657000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.657017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.657175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.657196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.657405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.657421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.657541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.657557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.657714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.657730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.657966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.657982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 
00:36:05.163 [2024-12-16 06:04:38.658189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.658205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.658464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.658481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.658643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.658662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.658806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.658823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.658982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.659000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.163 qpair failed and we were unable to recover it. 00:36:05.163 [2024-12-16 06:04:38.659152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.163 [2024-12-16 06:04:38.659168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.659341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.659358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.659508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.659524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.659730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.659749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.659955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.659971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 
00:36:05.164 [2024-12-16 06:04:38.660137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.660153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.660352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.660369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.660614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.660630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.660860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.660878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.661034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.661051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.661296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.661313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.661470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.661486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.661627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.661643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.661749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.661765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.661875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.661903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 
00:36:05.164 [2024-12-16 06:04:38.662057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.662074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.662325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.662341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.662575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.662591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.662749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.662766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.662973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.662990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.663203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.663219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.663369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.663385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.663537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.663552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.663693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.663708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.663871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.663888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 
00:36:05.164 [2024-12-16 06:04:38.664035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.664051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.664227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.664242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.664333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.664350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.664580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.664595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.664749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.664766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.665035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.665057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.665200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.665214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.665433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.665446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.164 qpair failed and we were unable to recover it. 00:36:05.164 [2024-12-16 06:04:38.665600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.164 [2024-12-16 06:04:38.665613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.665768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.665782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 
00:36:05.165 [2024-12-16 06:04:38.665881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.665895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.666047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.666059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.666214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.666227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.666375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.666387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.666540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.666553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.666644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.666656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.666810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.666822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.667029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.667043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.667186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.667201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.667345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.667358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 
00:36:05.165 [2024-12-16 06:04:38.667567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.667579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.667809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.667821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.668027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.668040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.668197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.668208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.668361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.668373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.668522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.668534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.668739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.668751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.668916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.668930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.669156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.669168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.669258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.669272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 
00:36:05.165 [2024-12-16 06:04:38.669436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.669449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.669528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.669540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.669750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.669762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.669852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.669864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.670074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.670087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.670174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.670186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.670391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.670404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.670578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.670592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.670822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.670835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.671011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.671023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 
00:36:05.165 [2024-12-16 06:04:38.671250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.671262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.671427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.671439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.671585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.671597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.671754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.165 [2024-12-16 06:04:38.671766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.165 qpair failed and we were unable to recover it. 00:36:05.165 [2024-12-16 06:04:38.671998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.672011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.672194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.672218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.672383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.672399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.672628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.672644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.672735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.672751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.672955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.672974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 
00:36:05.166 [2024-12-16 06:04:38.673206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.673224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.673381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.673397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.673563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.673579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.673748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.673763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.673969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.673985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.674190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.674207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.674390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.674405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.674618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.674634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.674864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.674884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.674988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.675003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 
00:36:05.166 [2024-12-16 06:04:38.675191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.675207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.675361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.675377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.675522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.675537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.675709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.675724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.675884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.675900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.676062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.676078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.676336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.676353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.676434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.676450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.676646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.676661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.676823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.676840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 
00:36:05.166 [2024-12-16 06:04:38.677053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.677070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.677283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.677298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.677388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.677404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.677501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.677517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.677689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.677708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.677867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.677883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.678065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.678081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.678315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.678332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.678482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.166 [2024-12-16 06:04:38.678498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.166 qpair failed and we were unable to recover it. 00:36:05.166 [2024-12-16 06:04:38.678734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.678750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 
00:36:05.167 [2024-12-16 06:04:38.678934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.678951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.679113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.679130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.679309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.679325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.679544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.679559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.679716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.679732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.679894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.679914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.680004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.680020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.680176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.680191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.680348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.680365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.680600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.680616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 
00:36:05.167 [2024-12-16 06:04:38.680878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.680895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.681000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.681015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.681167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.681183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.681412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.681428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.681528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.681544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.681762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.681780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.681869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.681885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.682092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.682108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.682267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.682283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.682425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.682441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 
00:36:05.167 [2024-12-16 06:04:38.682594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.682609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.682740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.682755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.682926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.682943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.683176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.683192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.683351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.683367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.683528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.683545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.683726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.683742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.683923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.683939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.684094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.167 [2024-12-16 06:04:38.684111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.167 qpair failed and we were unable to recover it. 00:36:05.167 [2024-12-16 06:04:38.684253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.684270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 
00:36:05.168 [2024-12-16 06:04:38.684421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.684437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.684642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.684659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.684834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.684854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.685088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.685104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.685262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.685278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.685490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.685507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.685664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.685680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.685888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.685905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.686064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.686079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.686229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.686246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 
00:36:05.168 [2024-12-16 06:04:38.686323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.686339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.686548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.686565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.686655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.686673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.686748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.686763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.686928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.686945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.687044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.687063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.687228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.687244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.687451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.687468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.687712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.687730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.687905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.687922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 
00:36:05.168 [2024-12-16 06:04:38.688152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.688167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.688310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.688325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.688480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.688496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.688656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.688673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.688877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.688895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.689084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.689101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.689293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.689308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.689515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.689532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.689717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.689732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 00:36:05.168 [2024-12-16 06:04:38.689832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.168 [2024-12-16 06:04:38.689853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.168 qpair failed and we were unable to recover it. 
00:36:05.168 [2024-12-16 06:04:38.690006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.690021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.690209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.690226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.690431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.690446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.690581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.690597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.690672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.690689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.690778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.690794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.691044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.691062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.691297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.691314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.691544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.691560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.691788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.691804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 
00:36:05.169 [2024-12-16 06:04:38.692059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.692076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.692308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.692325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.692496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.692512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.692719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.692743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.692923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.692939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.693196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.693214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.693380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.693398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.693653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.693670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.693902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.693921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.694171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.694187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 
00:36:05.169 [2024-12-16 06:04:38.694422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.694439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.694619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.694637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.694799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.694816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.695041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.695058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.695218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.695237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.695423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.695446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.695684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.695700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.695957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.695975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.696120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.696137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.696344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.696359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 
00:36:05.169 [2024-12-16 06:04:38.696583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.696599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.696741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.696757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.696988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.697004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.169 [2024-12-16 06:04:38.697090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.169 [2024-12-16 06:04:38.697106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.169 qpair failed and we were unable to recover it. 00:36:05.170 [2024-12-16 06:04:38.697338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.170 [2024-12-16 06:04:38.697353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.170 qpair failed and we were unable to recover it. 00:36:05.170 [2024-12-16 06:04:38.697610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.170 [2024-12-16 06:04:38.697627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.170 qpair failed and we were unable to recover it. 00:36:05.170 [2024-12-16 06:04:38.697782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.170 [2024-12-16 06:04:38.697798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.170 qpair failed and we were unable to recover it. 00:36:05.170 [2024-12-16 06:04:38.697964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.170 [2024-12-16 06:04:38.697981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.170 qpair failed and we were unable to recover it. 00:36:05.170 [2024-12-16 06:04:38.698190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.170 [2024-12-16 06:04:38.698206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.170 qpair failed and we were unable to recover it. 00:36:05.170 [2024-12-16 06:04:38.698426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.170 [2024-12-16 06:04:38.698444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.170 qpair failed and we were unable to recover it. 
00:36:05.170 [2024-12-16 06:04:38.698538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.170 [2024-12-16 06:04:38.698554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.170 qpair failed and we were unable to recover it. 00:36:05.170 [2024-12-16 06:04:38.698694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.170 [2024-12-16 06:04:38.698711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.170 qpair failed and we were unable to recover it. 00:36:05.170 [2024-12-16 06:04:38.698890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.170 [2024-12-16 06:04:38.698906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.170 qpair failed and we were unable to recover it. 00:36:05.170 [2024-12-16 06:04:38.698984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.170 [2024-12-16 06:04:38.698999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.170 qpair failed and we were unable to recover it. 00:36:05.170 [2024-12-16 06:04:38.699243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.170 [2024-12-16 06:04:38.699261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.170 qpair failed and we were unable to recover it. 00:36:05.170 [2024-12-16 06:04:38.699513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.170 [2024-12-16 06:04:38.699531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.170 qpair failed and we were unable to recover it. 00:36:05.170 [2024-12-16 06:04:38.699735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.170 [2024-12-16 06:04:38.699752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.170 qpair failed and we were unable to recover it. 00:36:05.170 [2024-12-16 06:04:38.699923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.170 [2024-12-16 06:04:38.699939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.170 qpair failed and we were unable to recover it. 00:36:05.170 [2024-12-16 06:04:38.700120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.170 [2024-12-16 06:04:38.700136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.170 qpair failed and we were unable to recover it. 00:36:05.170 [2024-12-16 06:04:38.700361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.170 [2024-12-16 06:04:38.700378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.170 qpair failed and we were unable to recover it. 
00:36:05.170 [2024-12-16 06:04:38.700540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.170 [2024-12-16 06:04:38.700556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420
00:36:05.170 qpair failed and we were unable to recover it.
00:36:05.170 [2024-12-16 06:04:38.701016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.170 [2024-12-16 06:04:38.701038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.170 qpair failed and we were unable to recover it.
[... the same three-line record repeats continuously from 06:04:38.700 through 06:04:38.741 (log clock 00:36:05.170-00:36:05.176): posix_sock_create reports "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair values 0xa1cd90, 0x7ffbb8000b90, 0x7ffbb0000b90, and 0x7ffbac000b90, all with addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:36:05.176 [2024-12-16 06:04:38.741219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.176 [2024-12-16 06:04:38.741236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.176 qpair failed and we were unable to recover it.
00:36:05.176 [2024-12-16 06:04:38.741478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.176 [2024-12-16 06:04:38.741494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.176 qpair failed and we were unable to recover it. 00:36:05.176 [2024-12-16 06:04:38.741652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.176 [2024-12-16 06:04:38.741669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.176 qpair failed and we were unable to recover it. 00:36:05.176 [2024-12-16 06:04:38.741812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.176 [2024-12-16 06:04:38.741828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.176 qpair failed and we were unable to recover it. 00:36:05.176 [2024-12-16 06:04:38.742071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.176 [2024-12-16 06:04:38.742088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.176 qpair failed and we were unable to recover it. 00:36:05.176 [2024-12-16 06:04:38.742296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.176 [2024-12-16 06:04:38.742312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.176 qpair failed and we were unable to recover it. 00:36:05.176 [2024-12-16 06:04:38.742472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.176 [2024-12-16 06:04:38.742488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.176 qpair failed and we were unable to recover it. 00:36:05.176 [2024-12-16 06:04:38.742776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.176 [2024-12-16 06:04:38.742792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.176 qpair failed and we were unable to recover it. 00:36:05.176 [2024-12-16 06:04:38.742883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.176 [2024-12-16 06:04:38.742900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.176 qpair failed and we were unable to recover it. 00:36:05.176 [2024-12-16 06:04:38.743092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.176 [2024-12-16 06:04:38.743108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.176 qpair failed and we were unable to recover it. 00:36:05.176 [2024-12-16 06:04:38.743279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.176 [2024-12-16 06:04:38.743296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.176 qpair failed and we were unable to recover it. 
00:36:05.176 [2024-12-16 06:04:38.743446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.176 [2024-12-16 06:04:38.743462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.176 qpair failed and we were unable to recover it. 00:36:05.176 [2024-12-16 06:04:38.743714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.176 [2024-12-16 06:04:38.743730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.176 qpair failed and we were unable to recover it. 00:36:05.176 [2024-12-16 06:04:38.743910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.176 [2024-12-16 06:04:38.743928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.176 qpair failed and we were unable to recover it. 00:36:05.176 [2024-12-16 06:04:38.744113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.176 [2024-12-16 06:04:38.744128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.176 qpair failed and we were unable to recover it. 00:36:05.176 [2024-12-16 06:04:38.744367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.176 [2024-12-16 06:04:38.744383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.176 qpair failed and we were unable to recover it. 00:36:05.176 [2024-12-16 06:04:38.744589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.176 [2024-12-16 06:04:38.744605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.176 qpair failed and we were unable to recover it. 00:36:05.176 [2024-12-16 06:04:38.744769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.176 [2024-12-16 06:04:38.744785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.176 qpair failed and we were unable to recover it. 00:36:05.176 [2024-12-16 06:04:38.745000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.176 [2024-12-16 06:04:38.745016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.176 qpair failed and we were unable to recover it. 00:36:05.176 [2024-12-16 06:04:38.745167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.176 [2024-12-16 06:04:38.745184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.176 qpair failed and we were unable to recover it. 00:36:05.176 [2024-12-16 06:04:38.745339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.176 [2024-12-16 06:04:38.745356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.176 qpair failed and we were unable to recover it. 
00:36:05.176 [2024-12-16 06:04:38.745618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.176 [2024-12-16 06:04:38.745634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.176 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.745742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.745757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.746004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.746021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.746296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.746313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.746419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.746435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.746613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.746630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.746772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.746788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.747011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.747029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.747293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.747309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.747533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.747549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 
00:36:05.177 [2024-12-16 06:04:38.747705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.747721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.747901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.747918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.748113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.748129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.748232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.748248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.748406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.748422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.748569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.748588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.748774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.748790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.748997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.749013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.749247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.749264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.749424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.749440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 
00:36:05.177 [2024-12-16 06:04:38.749666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.749682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.749860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.749877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.749985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.750001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.750109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.750126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.750266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.750281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.750362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.750378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.750536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.750552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.750803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.750820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.750976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.750994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 00:36:05.177 [2024-12-16 06:04:38.751157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.177 [2024-12-16 06:04:38.751173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.177 qpair failed and we were unable to recover it. 
00:36:05.178 [2024-12-16 06:04:38.751397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.751414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.751564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.751581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.751749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.751765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.751970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.751988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.752147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.752164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.752376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.752392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.752624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.752639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.752857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.752874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.753089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.753105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.753353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.753368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 
00:36:05.178 [2024-12-16 06:04:38.753624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.753639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.753729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.753745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.753951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.753969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.754175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.754191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.754358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.754374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.754609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.754625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.754777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.754793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.754932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.754949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.755158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.755174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.755425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.755443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 
00:36:05.178 [2024-12-16 06:04:38.755694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.755711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.755923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.755940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.756164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.756180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.756338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.756354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.756583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.756599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.756844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.756864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.757046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.757071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.757289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.757305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.757487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.757503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.757767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.757783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 
00:36:05.178 [2024-12-16 06:04:38.757883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.757900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.758062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.758079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.758237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.758253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.758484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.758501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.178 qpair failed and we were unable to recover it. 00:36:05.178 [2024-12-16 06:04:38.758658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.178 [2024-12-16 06:04:38.758675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.758764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.758781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.759000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.759019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.759162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.759178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.759334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.759349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.759611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.759632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 
00:36:05.179 [2024-12-16 06:04:38.759830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.759845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.760094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.760110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.760282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.760298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.760399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.760415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.760560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.760576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.760740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.760757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.760857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.760873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.761015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.761031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.761261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.761278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.761373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.761389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 
00:36:05.179 [2024-12-16 06:04:38.761493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.761509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.761659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.761677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.761818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.761834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.762040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.762058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.762280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.762296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.762536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.762552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.762800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.762816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.763061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.763079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.763315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.763332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.763536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.763553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 
00:36:05.179 [2024-12-16 06:04:38.763698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.763714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.763880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.763898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.764149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.764165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.764390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.764406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.764559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.764575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.764653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.764670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.764839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.764864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.765070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.765083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.765155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.765168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.765399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.765411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 
00:36:05.179 [2024-12-16 06:04:38.765566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.765578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.179 [2024-12-16 06:04:38.765854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.179 [2024-12-16 06:04:38.765867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.179 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.766040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.766052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.766216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.766230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.766425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.766437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.766585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.766597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.766762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.766774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.766942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.766956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.767106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.767118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.767301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.767316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 
00:36:05.180 [2024-12-16 06:04:38.767517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.767529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.767604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.767617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.767766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.767778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.767947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.767962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.768071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.768083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.768233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.768245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.768487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.768501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.768703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.768714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.768919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.768932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.769135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.769147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 
00:36:05.180 [2024-12-16 06:04:38.769296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.769308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.769543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.769556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.769740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.769753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.769943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.769956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.770104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.770116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.770291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.770304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.770465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.770477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.770650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.770662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.770839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.770857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.771006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.771018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 
00:36:05.180 [2024-12-16 06:04:38.771173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.771186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.771363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.771376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.771545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.771557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.771718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.771730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.771955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.771969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.772119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.180 [2024-12-16 06:04:38.772132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.180 qpair failed and we were unable to recover it. 00:36:05.180 [2024-12-16 06:04:38.772283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.772304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.772473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.772489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.772577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.772593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.772739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.772755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 
00:36:05.181 [2024-12-16 06:04:38.772920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.772938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.773170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.773186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.773339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.773354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.773510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.773528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.773636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.773652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.773824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.773840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.773996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.774013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.774153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.774169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.774274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.774290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.774450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.774469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 
00:36:05.181 [2024-12-16 06:04:38.774682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.774698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.774932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.774949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.775153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.775169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.775389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.775405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.775636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.775652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.775827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.775844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.776078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.776094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.776271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.776287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.776455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.776472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.776636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.776651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 
00:36:05.181 [2024-12-16 06:04:38.776895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.776913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.777092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.777109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.777318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.777334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.777423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.777440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.777577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.777592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.777692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.777709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.777810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.777826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.777983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.778000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.181 [2024-12-16 06:04:38.778151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.181 [2024-12-16 06:04:38.778167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.181 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.778338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.778354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 
00:36:05.182 [2024-12-16 06:04:38.778559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.778575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.778673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.778689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.778832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.778853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.779059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.779075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.779250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.779267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.779365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.779381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.779567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.779590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.779821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.779837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.780017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.780034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.780220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.780238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 
00:36:05.182 [2024-12-16 06:04:38.780387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.780403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.780558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.780574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.780730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.780746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.780910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.780928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.781012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.781027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.781238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.781254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.781410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.781426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.781588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.781604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.781809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.781826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.781985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.782002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 
00:36:05.182 [2024-12-16 06:04:38.782240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.782256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.782514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.782530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.782687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.782702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.782908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.782926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.783124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.783140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.783302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.783318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.783484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.783500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.783674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.783691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.783926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.783943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 00:36:05.182 [2024-12-16 06:04:38.784150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.182 [2024-12-16 06:04:38.784167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.182 qpair failed and we were unable to recover it. 
00:36:05.182 [2024-12-16 06:04:38.784346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.784362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.784544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.784560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.784793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.784809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.785025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.785044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.785185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.785200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.785422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.785438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.785653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.785669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.785772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.785787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.785941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.785958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.786112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.786129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 
00:36:05.183 [2024-12-16 06:04:38.786335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.786352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.786500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.786517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.786750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.786767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.786870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.786888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.787135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.787152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.787243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.787259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.787464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.787481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.787715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.787730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.787891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.787909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.788138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.788154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 
00:36:05.183 [2024-12-16 06:04:38.788390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.788407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.788642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.788659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.788809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.788825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.789078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.789095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.789200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.789215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.789321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.789338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.789508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.789524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.789686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.789701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.789857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.789874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.789964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.789981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 
00:36:05.183 [2024-12-16 06:04:38.790155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.790172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.790418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.790435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.790591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.790608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.790760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.790776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.183 [2024-12-16 06:04:38.790928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.183 [2024-12-16 06:04:38.790944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.183 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.791021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.791038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.791203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.791219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.791389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.791405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.791687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.791703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.791852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.791868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 
00:36:05.184 [2024-12-16 06:04:38.792140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.792156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.792381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.792397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.792603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.792619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.792792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.792810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.793025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.793045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.793258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.793271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.793499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.793513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.793668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.793680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.793835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.793853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.794000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.794012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 
00:36:05.184 [2024-12-16 06:04:38.794282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.794293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.794384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.794396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.794491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.794503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.794611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.794623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.794773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.794785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.794990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.795004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.795106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.795119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.795357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.795369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.795461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.795474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.795652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.795664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 
00:36:05.184 [2024-12-16 06:04:38.795771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.795784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.795945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.795958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.796143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.796156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.796396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.796410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.796621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.796635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.796731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.796743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.796886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.796900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.796996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.797008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.797182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.797194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 00:36:05.184 [2024-12-16 06:04:38.797430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.184 [2024-12-16 06:04:38.797444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.184 qpair failed and we were unable to recover it. 
00:36:05.184 [2024-12-16 06:04:38.797628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.185 [2024-12-16 06:04:38.797640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.185 qpair failed and we were unable to recover it. 00:36:05.185 [2024-12-16 06:04:38.797857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.185 [2024-12-16 06:04:38.797871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.185 qpair failed and we were unable to recover it. 00:36:05.185 [2024-12-16 06:04:38.798055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.185 [2024-12-16 06:04:38.798069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.185 qpair failed and we were unable to recover it. 00:36:05.185 [2024-12-16 06:04:38.798290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.185 [2024-12-16 06:04:38.798303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.185 qpair failed and we were unable to recover it. 00:36:05.185 [2024-12-16 06:04:38.798539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.185 [2024-12-16 06:04:38.798552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.185 qpair failed and we were unable to recover it. 00:36:05.185 [2024-12-16 06:04:38.798811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.185 [2024-12-16 06:04:38.798825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.185 qpair failed and we were unable to recover it. 00:36:05.185 [2024-12-16 06:04:38.798993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.185 [2024-12-16 06:04:38.799006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.185 qpair failed and we were unable to recover it. 00:36:05.185 [2024-12-16 06:04:38.799184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.185 [2024-12-16 06:04:38.799197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.185 qpair failed and we were unable to recover it. 00:36:05.185 [2024-12-16 06:04:38.799309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.185 [2024-12-16 06:04:38.799321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.185 qpair failed and we were unable to recover it. 00:36:05.185 [2024-12-16 06:04:38.799528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.185 [2024-12-16 06:04:38.799540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.185 qpair failed and we were unable to recover it. 
00:36:05.185 [2024-12-16 06:04:38.799777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.185 [2024-12-16 06:04:38.799792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.185 qpair failed and we were unable to recover it. 00:36:05.185 [2024-12-16 06:04:38.799948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.185 [2024-12-16 06:04:38.799961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.185 qpair failed and we were unable to recover it. 00:36:05.185 [2024-12-16 06:04:38.800109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.185 [2024-12-16 06:04:38.800121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.185 qpair failed and we were unable to recover it. 00:36:05.185 [2024-12-16 06:04:38.800352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.185 [2024-12-16 06:04:38.800364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.185 qpair failed and we were unable to recover it. 00:36:05.185 [2024-12-16 06:04:38.800573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.185 [2024-12-16 06:04:38.800589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.185 qpair failed and we were unable to recover it. 00:36:05.185 [2024-12-16 06:04:38.800774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.185 [2024-12-16 06:04:38.800787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.185 qpair failed and we were unable to recover it. 00:36:05.185 [2024-12-16 06:04:38.800976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.185 [2024-12-16 06:04:38.800989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.185 qpair failed and we were unable to recover it. 00:36:05.185 [2024-12-16 06:04:38.801073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.185 [2024-12-16 06:04:38.801085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.185 qpair failed and we were unable to recover it. 00:36:05.185 [2024-12-16 06:04:38.801182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.185 [2024-12-16 06:04:38.801195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.185 qpair failed and we were unable to recover it. 00:36:05.185 [2024-12-16 06:04:38.801296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.185 [2024-12-16 06:04:38.801309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.185 qpair failed and we were unable to recover it. 
00:36:05.191 [2024-12-16 06:04:38.839934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.191 [2024-12-16 06:04:38.839946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420
00:36:05.191 qpair failed and we were unable to recover it.
00:36:05.191 [2024-12-16 06:04:38.840101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.840113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.840261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.840272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.840427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.840439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.840663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.840675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.840889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.840901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.841171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.841183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.841337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.841349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.841577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.841588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.841762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.841773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.842022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.842035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 
00:36:05.191 [2024-12-16 06:04:38.842190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.842202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.842416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.842428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.842646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.842658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.842881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.842893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.843150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.843162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.843367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.843379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.843633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.843645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.843791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.843803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.843961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.843974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.844200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.844212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 
00:36:05.191 [2024-12-16 06:04:38.844307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.844318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.844414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.844426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.844652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.844664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.844898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.844911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.845168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.845180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.845409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.845421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.845576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.845587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.845791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.845803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.845976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.845989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.191 [2024-12-16 06:04:38.846229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.846241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 
00:36:05.191 [2024-12-16 06:04:38.846376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.191 [2024-12-16 06:04:38.846387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.191 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.846540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.846553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.846707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.846719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.846902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.846915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.847155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.847171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.847399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.847411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.847562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.847573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.847778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.847790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.847944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.847957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.848098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.848111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 
00:36:05.192 [2024-12-16 06:04:38.848286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.848297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.848377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.848388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.848593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.848606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.848806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.848817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.848991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.849003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.849162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.849175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.849378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.849390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.849622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.849634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.849868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.849880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.850132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.850144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 
00:36:05.192 [2024-12-16 06:04:38.850306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.850317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.850544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.850556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.850791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.850803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.851078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.851091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.851308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.851321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.851546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.851558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.851834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.851850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.851986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.851998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.852223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.852235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.852387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.852398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 
00:36:05.192 [2024-12-16 06:04:38.852554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.852567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.852809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.852830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.853051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.853073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.853258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.853274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.853526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.853542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.853782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.853797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.854020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.854036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.192 [2024-12-16 06:04:38.854243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.192 [2024-12-16 06:04:38.854258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.192 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.854416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.854432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.854533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.854549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 
00:36:05.193 [2024-12-16 06:04:38.854690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.854706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.854866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.854882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.855042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.855057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.855265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.855280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.855507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.855526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.855799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.855814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.856027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.856043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.856273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.856289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.856447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.856462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.856615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.856630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 
00:36:05.193 [2024-12-16 06:04:38.856792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.856808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.857040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.857056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.857269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.857285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.857392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.857408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.857579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.857595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.857806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.857822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.857977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.857993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.858227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.858242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.858385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.858401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.858636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.858652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 
00:36:05.193 [2024-12-16 06:04:38.858899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.858915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.859132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.859147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.859305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.859321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.859503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.859519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.859674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.859689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.859842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.859861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.860022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.860038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.860201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.860216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.860381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.860396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.860648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.860664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 
00:36:05.193 [2024-12-16 06:04:38.860871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.860887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.861064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.861082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.861316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.861331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.861564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.861580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.861758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.861773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.193 qpair failed and we were unable to recover it. 00:36:05.193 [2024-12-16 06:04:38.861951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.193 [2024-12-16 06:04:38.861968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.862105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.862120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.862375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.862390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.862555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.862571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.862778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.862793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 
00:36:05.194 [2024-12-16 06:04:38.862957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.862973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.863184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.863199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.863358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.863372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.863599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.863615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.863771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.863790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.864025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.864042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.864272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.864287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.864393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.864408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.864585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.864601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.864703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.864717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 
00:36:05.194 [2024-12-16 06:04:38.864945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.864961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.865056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.865071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.865322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.865337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.865571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.865586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.865796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.865811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.866069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.866085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.866241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.866256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.866483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.866498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.866732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.866748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.866841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.866861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 
00:36:05.194 [2024-12-16 06:04:38.867032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.867047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.867300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.867316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.867474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.867489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.867731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.867746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.867903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.867919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.868014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.868030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.868250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.868266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.868408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.868423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.868638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.868654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.868873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.868888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 
00:36:05.194 [2024-12-16 06:04:38.869040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.869055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.869317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.869341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.194 [2024-12-16 06:04:38.869603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.194 [2024-12-16 06:04:38.869619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.194 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.869770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.869785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.869987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.870005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.870185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.870201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.870359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.870375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.870583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.870598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.870812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.870828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.871006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.871023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 
00:36:05.195 [2024-12-16 06:04:38.871249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.871264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.871436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.871452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.871540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.871556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.871701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.871716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.871874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.871891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.872164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.872180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.872409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.872425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.872599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.872615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.872850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.872867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.873027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.873043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 
00:36:05.195 [2024-12-16 06:04:38.873271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.873287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.873379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.873395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.873567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.873583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.873727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.873742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.874000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.874016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.874226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.874241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.874474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.874490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.874728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.874743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.874916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.874935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.875167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.875183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 
00:36:05.195 [2024-12-16 06:04:38.875418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.875434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.875535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.875551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.875790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.875806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.875904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.875920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.876175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.876190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.876417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.876432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.876575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.876590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.876832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.876853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.877084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.877100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.877187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.877202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 
00:36:05.195 [2024-12-16 06:04:38.877373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.877388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.877601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.195 [2024-12-16 06:04:38.877616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.195 qpair failed and we were unable to recover it. 00:36:05.195 [2024-12-16 06:04:38.877780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.877796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.877899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.877916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.878133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.878148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.878396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.878412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.878645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.878661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.878815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.878831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.879039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.879056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.879220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.879236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 
00:36:05.196 [2024-12-16 06:04:38.879469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.879485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.879633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.879648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.879799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.879815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.879973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.879989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.880152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.880168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.880343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.880359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.880596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.880612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.880772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.880787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.880964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.880981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.881130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.881145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 
00:36:05.196 [2024-12-16 06:04:38.881351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.881366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.881509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.881524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.881723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.881738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.881842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.881863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.882015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.882031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.882208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.882223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.882365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.882381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.882633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.882649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.882891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.882908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.883068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.883087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 
00:36:05.196 [2024-12-16 06:04:38.883312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.883327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.883566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.883582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.883755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.883771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.884003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.884020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.884251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.884267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.884434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.196 [2024-12-16 06:04:38.884450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.196 qpair failed and we were unable to recover it. 00:36:05.196 [2024-12-16 06:04:38.884547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.884563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.884769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.884785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.884994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.885010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.885167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.885183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 
00:36:05.197 [2024-12-16 06:04:38.885422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.885437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.885593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.885609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.885818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.885839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.885995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.886011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.886154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.886170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.886327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.886342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.886570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.886586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.886823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.886839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.887078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.887094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.887193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.887209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 
00:36:05.197 [2024-12-16 06:04:38.887446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.887462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.887613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.887629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.887784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.887799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.888005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.888022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.888122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.888137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.888276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.888292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.888442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.888458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.888641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.888656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.888883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.888899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.889133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.889149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 
00:36:05.197 [2024-12-16 06:04:38.889315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.889330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.889483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.889498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.889757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.889772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.889936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.889952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.890182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.890199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.890297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.890312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.890559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.890575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.890668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.890684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.890943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.890959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.891191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.891210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 
00:36:05.197 [2024-12-16 06:04:38.891439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.197 [2024-12-16 06:04:38.891455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.197 qpair failed and we were unable to recover it. 00:36:05.197 [2024-12-16 06:04:38.891683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.891698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.891951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.891967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.892119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.892135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.892236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.892251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.892427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.892442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.892662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.892678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.892908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.892924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.893180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.893195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.893403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.893419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 
00:36:05.198 [2024-12-16 06:04:38.893671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.893687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.893894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.893910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.894116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.894134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.894339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.894354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.894533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.894548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.894765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.894780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.894960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.894976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.895213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.895228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.895461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.895477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.895577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.895592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 
00:36:05.198 [2024-12-16 06:04:38.895832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.895853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.896020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.896036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.896144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.896159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.896362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.896377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.896580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.896596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.896888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.896905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.897122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.897138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.897287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.897302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.897576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.897593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.897766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.897782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 
00:36:05.198 [2024-12-16 06:04:38.897994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.898010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.898229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.898245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.898427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.898443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.898618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.898634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.898745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.898761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.898937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.898953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.899122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.899139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.899295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.899312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.899496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.899513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 00:36:05.198 [2024-12-16 06:04:38.899621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.198 [2024-12-16 06:04:38.899639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.198 qpair failed and we were unable to recover it. 
00:36:05.199 [2024-12-16 06:04:38.899867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.899884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.900115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.900131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.900233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.900249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.900358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.900375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.900545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.900562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.900733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.900748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.900888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.900905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.901168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.901184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.901284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.901300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.901539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.901556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 
00:36:05.199 [2024-12-16 06:04:38.901644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.901660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.901833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.901853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.902006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.902022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.902110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.902126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.902358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.902374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.902600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.902615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.902857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.902874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.903113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.903128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.903304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.903320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.903471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.903487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 
00:36:05.199 [2024-12-16 06:04:38.903655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.903671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.903817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.903832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.903995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.904012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.904239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.904254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.904404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.904420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.904570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.904585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.904749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.904766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.904907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.904923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.905072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.905087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.905262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.905279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 
00:36:05.199 [2024-12-16 06:04:38.905510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.905526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.905760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.905777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.906011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.906027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.906268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.906284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.906460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.906477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.906632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.199 [2024-12-16 06:04:38.906647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.199 qpair failed and we were unable to recover it. 00:36:05.199 [2024-12-16 06:04:38.906794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.906810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.907043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.907059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.907209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.907224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.907397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.907416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 
00:36:05.200 [2024-12-16 06:04:38.907662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.907678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.907829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.907844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.908022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.908037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.908160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.908175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.908406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.908422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.908651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.908666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.908912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.908928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.909134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.909150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.909310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.909327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.909534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.909549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 
00:36:05.200 [2024-12-16 06:04:38.909727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.909744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.909899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.909915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.910148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.910164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.910326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.910342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.910499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.910515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.910676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.910692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.910776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.910792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.911003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.911019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.911114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.911129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.911309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.911324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 
00:36:05.200 [2024-12-16 06:04:38.911498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.911515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.911744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.911761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.912036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.912053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.912306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.912321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.912532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.912549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.912755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.912771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.912880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.912898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.912996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.913011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.913216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.913232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.913395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.913412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 
00:36:05.200 [2024-12-16 06:04:38.913585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.913601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.913784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.913799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.914006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.914022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.914128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.914144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.914296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.200 [2024-12-16 06:04:38.914311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.200 qpair failed and we were unable to recover it. 00:36:05.200 [2024-12-16 06:04:38.914391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.914406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.914559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.914576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.914719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.914736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.914820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.914836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.915075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.915093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 
00:36:05.201 [2024-12-16 06:04:38.915243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.915259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.915486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.915502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.915729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.915744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.916004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.916021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.916259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.916274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.916507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.916522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.916678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.916693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.916834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.916856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.916959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.916974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.917179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.917196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 
00:36:05.201 [2024-12-16 06:04:38.917346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.917361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.917526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.917542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.917695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.917711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.917919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.917936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.918016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.918031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.918262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.918278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.918446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.918462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.918567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.918583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.918742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.918757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.918940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.918956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 
00:36:05.201 [2024-12-16 06:04:38.919125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.919141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.919310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.919327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.919561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.919576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.919760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.919775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.919919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.919935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.920182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.920198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.920382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.920398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.920576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.201 [2024-12-16 06:04:38.920591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.201 qpair failed and we were unable to recover it. 00:36:05.201 [2024-12-16 06:04:38.920808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.920825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.920932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.920949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 
00:36:05.202 [2024-12-16 06:04:38.921203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.921219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.921444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.921461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.921563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.921578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.921751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.921767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.921929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.921946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.922091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.922106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.922264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.922281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.922512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.922528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.922780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.922796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.922952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.922973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 
00:36:05.202 [2024-12-16 06:04:38.923069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.923085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.923296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.923312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.923457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.923474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.923639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.923656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.923868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.923887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.924058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.924075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.924249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.924266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.924464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.924482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.924582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.924598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.924708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.924724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 
00:36:05.202 [2024-12-16 06:04:38.924892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.924908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.925059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.925075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.925351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.925368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.925543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.925560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.925660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.925676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.925818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.925834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.926022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.926038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.926249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.926264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.926501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.926517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.926681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.926697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 
00:36:05.202 [2024-12-16 06:04:38.926900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.926918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.927079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.927095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.927302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.927317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.927410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.927426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.927514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.927529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.927756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.927772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.927873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.202 [2024-12-16 06:04:38.927890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.202 qpair failed and we were unable to recover it. 00:36:05.202 [2024-12-16 06:04:38.927967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.927982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.928145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.928163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.928384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.928400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 
00:36:05.203 [2024-12-16 06:04:38.928579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.928594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.928818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.928835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.929020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.929038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.929245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.929262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.929479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.929494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.929772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.929788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.929879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.929894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.930055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.930072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.930301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.930317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.930410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.930427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 
00:36:05.203 [2024-12-16 06:04:38.930581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.930596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.930746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.930762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.930929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.930945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.931046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.931061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.931244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.931261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.931409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.931424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.931639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.931655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.931822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.931839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.932050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.932067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.932242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.932258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 
00:36:05.203 [2024-12-16 06:04:38.932479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.932494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.932689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.932705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.932811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.932829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.932952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.932975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.933078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.933091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.933236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.933249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.933402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.933415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.933619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.933632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.933771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.933784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.934016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.934030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 
00:36:05.203 [2024-12-16 06:04:38.934120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.934132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.934289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.934301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.934399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.934412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.934572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.934584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.934669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.203 [2024-12-16 06:04:38.934681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.203 qpair failed and we were unable to recover it. 00:36:05.203 [2024-12-16 06:04:38.934836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.934854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.935011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.935025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.935132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.935144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.935246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.935259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.935442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.935454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 
00:36:05.204 [2024-12-16 06:04:38.935548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.935561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.935768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.935782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.935883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.935895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.935998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.936011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.936097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.936108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.936184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.936195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.936333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.936345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.936538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.936550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.936702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.936715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.936885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.936900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 
00:36:05.204 [2024-12-16 06:04:38.936983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.936996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.937136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.937147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.937294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.937307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.937523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.937535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.937762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.937774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.937932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.937946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.938054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.938067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.938271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.938283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.938535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.938547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.938632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.938645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 
00:36:05.204 [2024-12-16 06:04:38.938747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.938759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.938930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.938944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.939102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.939115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.939226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.939240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.939380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.939391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.939522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.939534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.939675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.939687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.939862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.939877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.940113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.940126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.940216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.940229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 
00:36:05.204 [2024-12-16 06:04:38.940329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.940343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.940443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.940455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.940677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.940691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.940844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.204 [2024-12-16 06:04:38.940860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.204 qpair failed and we were unable to recover it. 00:36:05.204 [2024-12-16 06:04:38.941002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.941014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.941156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.941169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.941338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.941351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.941425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.941438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.941538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.941549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.941781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.941793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 
00:36:05.205 [2024-12-16 06:04:38.942017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.942030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.942236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.942251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.942531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.942544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.942686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.942699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.942949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.942962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.943048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.943060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.943171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.943184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.943336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.943348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.943481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.943493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.943693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.943708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 
00:36:05.205 [2024-12-16 06:04:38.943912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.943925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.944095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.944107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.944316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.944329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.944539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.944551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.944734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.944746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.944895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.944908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.945063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.945074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.945160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.945172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.945321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.945333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.945478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.945491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 
00:36:05.205 [2024-12-16 06:04:38.945642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.945653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.945815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.945829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.946062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.946075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.946288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.946301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.946530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.946542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.946786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.946799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.946950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.946962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.947061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.947073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.947296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.205 [2024-12-16 06:04:38.947310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.205 qpair failed and we were unable to recover it. 00:36:05.205 [2024-12-16 06:04:38.947469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.947481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 
00:36:05.206 [2024-12-16 06:04:38.947684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.947700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.947876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.947890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.948101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.948115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.948250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.948263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.948360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.948372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.948576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.948589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.948819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.948831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.949081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.949094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.949275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.949288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.949373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.949385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 
00:36:05.206 [2024-12-16 06:04:38.949615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.949629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.949766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.949778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.949994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.950008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.950191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.950203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.950317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.950328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.950482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.950495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.950657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.950671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.950763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.950775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.950870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.950883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.951023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.951038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 
00:36:05.206 [2024-12-16 06:04:38.951198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.951212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.951369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.951382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.951601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.951614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.951845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.951861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.952033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.952045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.952146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.952159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.952315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.952328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.952475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.952487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.952679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.952693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.952854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.952867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 
00:36:05.206 [2024-12-16 06:04:38.952971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.952984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.953075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.953087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.953173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.953185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.953347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.953359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.953514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.953529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.953755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.953768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.953857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.953869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.953969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.953982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.954132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.954143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.954303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.954316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 
00:36:05.206 [2024-12-16 06:04:38.954498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.206 [2024-12-16 06:04:38.954510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.206 qpair failed and we were unable to recover it. 00:36:05.206 [2024-12-16 06:04:38.954755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.954768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.955000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.955013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.955267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.955281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.955384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.955395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.955650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.955663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.955971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.955995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.956161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.956177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.956328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.956344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.956523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.956538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 
00:36:05.207 [2024-12-16 06:04:38.956763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.956781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.956973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.956991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.957144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.957160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.957267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.957284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.957380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.957398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.957638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.957654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.957844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.957867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.958034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.958052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.958212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.958228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.958335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.958351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 
00:36:05.207 [2024-12-16 06:04:38.958656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.958672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.958824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.958840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.958947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.958965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.959168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.959184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.959393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.959409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.959569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.959585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.959796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.959813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.959969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.959985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.960216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.960232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.960345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.960361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 
00:36:05.207 [2024-12-16 06:04:38.960592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.960609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.960759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.960776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.961012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.961028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.961138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.961157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.961311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.961327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.961429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.961446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.961588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.961604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.961683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.961699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.961929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.961945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.962038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.962053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 
00:36:05.207 [2024-12-16 06:04:38.962194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.962210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.962389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.962406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.962512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.207 [2024-12-16 06:04:38.962528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.207 qpair failed and we were unable to recover it. 00:36:05.207 [2024-12-16 06:04:38.962772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.962788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.962969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.962986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.963146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.963162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.963397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.963414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.963575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.963592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.963839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.963861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.964074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.964090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 
00:36:05.208 [2024-12-16 06:04:38.964249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.964266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.964420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.964436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.964696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.964723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.964962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.964980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.965091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.965107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.965269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.965285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.965394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.965410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.965561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.965577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.965816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.965833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.965961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.965982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 
00:36:05.208 [2024-12-16 06:04:38.966080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.966098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.966251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.966268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.966425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.966442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.966648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.966664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.966893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.966910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.967071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.967087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.967270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.967286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.967450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.967465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.967691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.967707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.967912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.967930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 
00:36:05.208 [2024-12-16 06:04:38.968159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.968174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.968259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.968275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.968439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.968454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.968553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.968569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.968751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.968767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.968946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.968962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.969110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.969127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.969239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.969255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.969330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.969347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.969448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.969463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 
00:36:05.208 [2024-12-16 06:04:38.969566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.969581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.969755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.969771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.969886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.969903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.969980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.969995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.970134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.970149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.970238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.208 [2024-12-16 06:04:38.970254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.208 qpair failed and we were unable to recover it. 00:36:05.208 [2024-12-16 06:04:38.970480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.970496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.970602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.970617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.970897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.970913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.971005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.971021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 
00:36:05.209 [2024-12-16 06:04:38.971104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.971120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.971213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.971229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.971461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.971478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.971650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.971665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.971912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.971928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.972157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.972174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.972348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.972363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.972456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.972471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.972624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.972641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.972734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.972750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 
00:36:05.209 [2024-12-16 06:04:38.972945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.972965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.973043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.973059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.973244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.973262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.973437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.973453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.973549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.973564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.973837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.973858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.973970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.973988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.974197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.974213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.974319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.974335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.974454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.974481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 
00:36:05.209 [2024-12-16 06:04:38.974696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.974712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.974863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.974880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.975091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.975108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.975324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.975340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.975444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.975459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.975560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.975577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.975798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.975814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.976028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.976044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.976204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.976221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.976375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.976391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 
00:36:05.209 [2024-12-16 06:04:38.976645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.976661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.976809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.976825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.209 qpair failed and we were unable to recover it. 00:36:05.209 [2024-12-16 06:04:38.976916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.209 [2024-12-16 06:04:38.976933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.977088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.977103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.977198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.977214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.977370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.977386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.977655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.977671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.977819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.977835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.978012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.978028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.978193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.978211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 
00:36:05.210 [2024-12-16 06:04:38.978317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.978332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.978416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.978433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.978541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.978557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.978707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.978723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.978811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.978827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.978920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.978937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.979115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.979131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.979223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.979238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.979330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.979347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.979429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.979445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 
00:36:05.210 [2024-12-16 06:04:38.979738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.979757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.979926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.979943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.980035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.980051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.980208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.980224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.980376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.980392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.980545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.980560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.980762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.980778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.980884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.980900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.981021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.981037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.981124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.981140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 
00:36:05.210 [2024-12-16 06:04:38.981224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.210 [2024-12-16 06:04:38.981239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.210 qpair failed and we were unable to recover it. 00:36:05.210 [2024-12-16 06:04:38.981343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.981358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.981600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.981616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.981731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.981748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.981937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.981954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.982053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.982069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.982230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.982246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.982408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.982426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.982604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.982620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.982857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.982874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 
00:36:05.211 [2024-12-16 06:04:38.982991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.983007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.983098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.983114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.983331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.983346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.983459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.983475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.983726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.983743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.983945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.983962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.984068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.984084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.984176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.984198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.984377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.984393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.984499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.984515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 
00:36:05.211 [2024-12-16 06:04:38.984692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.984707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.984890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.984907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.984992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.985008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.985099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.985116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.985265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.985281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.985439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.985455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.985684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.985700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.985912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.985930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.986031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.986058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.986156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.986172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 
00:36:05.211 [2024-12-16 06:04:38.986327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.986344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.986458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.986474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.986646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.986662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.986763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.986779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.986969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.986987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.987087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.987111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.987282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.987299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.987446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.987463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.987616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.987632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.987873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.987891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 
00:36:05.211 [2024-12-16 06:04:38.988077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.988093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.988192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.988209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.988308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.988326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.211 [2024-12-16 06:04:38.988552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.211 [2024-12-16 06:04:38.988568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.211 qpair failed and we were unable to recover it. 00:36:05.212 [2024-12-16 06:04:38.988785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.498 [2024-12-16 06:04:38.988804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.498 qpair failed and we were unable to recover it. 00:36:05.498 [2024-12-16 06:04:38.988899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.498 [2024-12-16 06:04:38.988915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.498 qpair failed and we were unable to recover it. 00:36:05.498 [2024-12-16 06:04:38.989111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.498 [2024-12-16 06:04:38.989127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.498 qpair failed and we were unable to recover it. 00:36:05.498 [2024-12-16 06:04:38.989298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.498 [2024-12-16 06:04:38.989315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.498 qpair failed and we were unable to recover it. 00:36:05.498 [2024-12-16 06:04:38.989471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.498 [2024-12-16 06:04:38.989487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.498 qpair failed and we were unable to recover it. 00:36:05.498 [2024-12-16 06:04:38.989727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.498 [2024-12-16 06:04:38.989744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.498 qpair failed and we were unable to recover it. 
00:36:05.498 [2024-12-16 06:04:38.989961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.498 [2024-12-16 06:04:38.989978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.498 qpair failed and we were unable to recover it. 00:36:05.498 [2024-12-16 06:04:38.990135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.498 [2024-12-16 06:04:38.990151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.498 qpair failed and we were unable to recover it. 00:36:05.498 [2024-12-16 06:04:38.990334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.498 [2024-12-16 06:04:38.990350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.498 qpair failed and we were unable to recover it. 00:36:05.498 [2024-12-16 06:04:38.990594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.498 [2024-12-16 06:04:38.990611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.498 qpair failed and we were unable to recover it. 00:36:05.498 [2024-12-16 06:04:38.990812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.498 [2024-12-16 06:04:38.990830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.498 qpair failed and we were unable to recover it. 00:36:05.498 [2024-12-16 06:04:38.991000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.498 [2024-12-16 06:04:38.991016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.498 qpair failed and we were unable to recover it. 00:36:05.498 [2024-12-16 06:04:38.991177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.498 [2024-12-16 06:04:38.991194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.498 qpair failed and we were unable to recover it. 00:36:05.498 [2024-12-16 06:04:38.991410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.498 [2024-12-16 06:04:38.991426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.498 qpair failed and we were unable to recover it. 00:36:05.498 [2024-12-16 06:04:38.991528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.498 [2024-12-16 06:04:38.991546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.498 qpair failed and we were unable to recover it. 00:36:05.498 [2024-12-16 06:04:38.991775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.498 [2024-12-16 06:04:38.991792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.498 qpair failed and we were unable to recover it. 
00:36:05.498 [2024-12-16 06:04:38.991981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.498 [2024-12-16 06:04:38.991997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.498 qpair failed and we were unable to recover it. 00:36:05.498 [2024-12-16 06:04:38.992143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.498 [2024-12-16 06:04:38.992158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.498 qpair failed and we were unable to recover it. 00:36:05.498 [2024-12-16 06:04:38.992373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.498 [2024-12-16 06:04:38.992389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.992618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.992634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.992784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.992801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.992896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.992913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.993058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.993074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.993230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.993246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.993393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.993410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.993577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.993594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 
00:36:05.499 [2024-12-16 06:04:38.993684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.993699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.993795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.993814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.993900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.993918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.994062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.994077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.994219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.994236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.994330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.994346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.994571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.994587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.994671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.994687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.994970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.994987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.995093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.995110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 
00:36:05.499 [2024-12-16 06:04:38.995205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.995221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.995318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.995335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.995419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.995435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.995516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.995533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.995676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.995692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.995780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.995800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.995904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.995921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.996012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.996028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.996138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.996156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.996389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.996404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 
00:36:05.499 [2024-12-16 06:04:38.996593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.996608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.996767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.996783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.997040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.997056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.997221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.997236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.997318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.997335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.997500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.997517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.997657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.997674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.997831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.997852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.998020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.998040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.998191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.998208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 
00:36:05.499 [2024-12-16 06:04:38.998370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.499 [2024-12-16 06:04:38.998387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.499 qpair failed and we were unable to recover it. 00:36:05.499 [2024-12-16 06:04:38.998585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:38.998600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:38.998763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:38.998780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:38.998867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:38.998883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:38.998990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:38.999006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:38.999155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:38.999170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:38.999315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:38.999331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:38.999428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:38.999446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:38.999611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:38.999627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:38.999768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:38.999785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 
00:36:05.500 [2024-12-16 06:04:38.999960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:38.999976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.000209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.000226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.000328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.000343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.000515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.000532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.000762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.000777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.000865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.000881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.000989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.001006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.001111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.001128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.001295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.001311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.001510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.001525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 
00:36:05.500 [2024-12-16 06:04:39.001666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.001682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.001828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.001843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.002020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.002037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.002132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.002148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.002243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.002259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.002393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.002412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.002598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.002614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.002714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.002730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.002967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.002985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.003093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.003110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 
00:36:05.500 [2024-12-16 06:04:39.003290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.003307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.003383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.003399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.003490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.003508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.003693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.003710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.003919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.003937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.004086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.004103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.004336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.500 [2024-12-16 06:04:39.004352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.500 qpair failed and we were unable to recover it. 00:36:05.500 [2024-12-16 06:04:39.004432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.004448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.004607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.004622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.004777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.004793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 
00:36:05.501 [2024-12-16 06:04:39.004956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.004973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.005069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.005086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.005238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.005256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.005409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.005426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.005637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.005654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.005879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.005896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.006053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.006069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.006232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.006249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.006446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.006462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.006657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.006674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 
00:36:05.501 [2024-12-16 06:04:39.006883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.006901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.007007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.007023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.007190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.007209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.007417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.007433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.007528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.007545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.007702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.007718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.007790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.007806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.007972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.007990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.008078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.008094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.008197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.008214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 
00:36:05.501 [2024-12-16 06:04:39.008372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.008388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.008483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.008500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.008572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.008588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.008665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.008681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.008757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.008773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.008917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.008933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.009029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.009046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.009122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.009138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.009242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.009258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.009409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.009427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 
00:36:05.501 [2024-12-16 06:04:39.009506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.009523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.009604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.009620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.009720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.009736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.009817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.501 [2024-12-16 06:04:39.009834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.501 qpair failed and we were unable to recover it. 00:36:05.501 [2024-12-16 06:04:39.010000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.010016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.010105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.010121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.010226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.010243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.010401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.010418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.010567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.010584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.010670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.010686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 
00:36:05.502 [2024-12-16 06:04:39.010796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.010811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.010909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.010925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.011025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.011043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.011274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.011290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.011439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.011456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.011540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.011556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.011661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.011677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.011767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.011783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.011940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.011957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.012052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.012069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 
00:36:05.502 [2024-12-16 06:04:39.012239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.012254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.012341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.012357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.012435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.012451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.012547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.012565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.012658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.012674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.012757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.012773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.012884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.012900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.013057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.013074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.013236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.013252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.013416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.013432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 
00:36:05.502 [2024-12-16 06:04:39.013535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.013551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.013651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.013667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.013755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.013770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.013868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.013885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.013980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.013995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.014143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.014159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.014270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.014288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.014460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.014475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.014563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.014580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.014669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.014685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 
00:36:05.502 [2024-12-16 06:04:39.014834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.014853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.015004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.015020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.015105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.502 [2024-12-16 06:04:39.015121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.502 qpair failed and we were unable to recover it. 00:36:05.502 [2024-12-16 06:04:39.015215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.015231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.015311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.015328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.015420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.015436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.015530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.015546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.015692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.015707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.015867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.015884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.015966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.015982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 
00:36:05.503 [2024-12-16 06:04:39.016096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.016113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.016192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.016208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.016283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.016299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.016384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.016400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.016481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.016497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.016576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.016592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.016739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.016755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.016898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.016914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.016997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.017013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.017103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.017119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 
00:36:05.503 [2024-12-16 06:04:39.017263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.017279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.017353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.017369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.017446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.017462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.017540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.017559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.017715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.017732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.017813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.017828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.017981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.017997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.018140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.018155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.018303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.018320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.018395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.018410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 
00:36:05.503 [2024-12-16 06:04:39.018553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.018569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.018712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.018728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.018813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.018829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.018982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.018999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.503 [2024-12-16 06:04:39.019107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.503 [2024-12-16 06:04:39.019122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.503 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.019198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.019213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.019286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.019303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.019384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.019400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.019486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.019502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.019583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.019600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 
00:36:05.504 [2024-12-16 06:04:39.019741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.019758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.019897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.019915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.020056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.020072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.020232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.020248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.020324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.020340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.020429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.020445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.020540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.020556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.020660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.020677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.020754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.020771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.020929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.020946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 
00:36:05.504 [2024-12-16 06:04:39.021037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.021053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.021142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.021159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.021248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.021264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.021340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.021357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.021446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.021462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.021608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.021624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.021774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.021790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.021873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.021890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.021978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.021994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.022076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.022093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 
00:36:05.504 [2024-12-16 06:04:39.022168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.022184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.022267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.022283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.022375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.022391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.022478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.022496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.022582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.022599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.022738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.022754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.022837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.022868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.022946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.022962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.023117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.023133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.023211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.023227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 
00:36:05.504 [2024-12-16 06:04:39.023387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.023403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.504 [2024-12-16 06:04:39.023493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.504 [2024-12-16 06:04:39.023508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.504 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.023590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.023607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.023693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.023708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.023920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.023936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.024028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.024044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.024305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.024321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.024416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.024432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.024568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.024584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.024680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.024697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 
00:36:05.505 [2024-12-16 06:04:39.024782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.024798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.024878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.024894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.024988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.025003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.025166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.025182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.025273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.025289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.025379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.025395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.025546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.025563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.025724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.025740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.025824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.025840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.025934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.025951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 
00:36:05.505 [2024-12-16 06:04:39.026040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.026055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.026226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.026242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.026317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.026333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.026475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.026490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.026699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.026715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.026800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.026820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.026929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.026945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.027032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.027047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.027137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.027153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.027243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.027259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 
00:36:05.505 [2024-12-16 06:04:39.027464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.027480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.027567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.027583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.027731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.027748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.027825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.027865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.027941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.027956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.028043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.028060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.028217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.028232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.028402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.028418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.028554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.028570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 00:36:05.505 [2024-12-16 06:04:39.028654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.505 [2024-12-16 06:04:39.028669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.505 qpair failed and we were unable to recover it. 
00:36:05.505 [2024-12-16 06:04:39.028822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.028838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.028948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.028964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.029128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.029145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.029226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.029243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.029328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.029344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.029431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.029449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.029590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.029606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.029711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.029727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.029812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.029830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.029933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.029950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 
00:36:05.506 [2024-12-16 06:04:39.030044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.030060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.030139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.030155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.030240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.030256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.030344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.030360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.030436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.030453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.030545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.030569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.030668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.030685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.030841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.030862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.031016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.031033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.031110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.031125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 
00:36:05.506 [2024-12-16 06:04:39.031272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.031288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.031365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.031381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.031463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.031479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.031620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.031636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.031723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.031739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.031825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.031842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.031999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.032015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.032112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.032127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.032276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.032292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.032415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.032430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 
00:36:05.506 [2024-12-16 06:04:39.032524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.032541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.032622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.032638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.032727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.032744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.032883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.032902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.506 [2024-12-16 06:04:39.033063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.506 [2024-12-16 06:04:39.033080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.506 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.033285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.033302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.033443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.033459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.033612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.033629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.033712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.033728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.033807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.033822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 
00:36:05.507 [2024-12-16 06:04:39.033980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.033996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.034069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.034086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.034174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.034189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.034273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.034302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.034452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.034467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.034557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.034574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.034655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.034671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.034760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.034777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.034930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.034947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.035047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.035063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 
00:36:05.507 [2024-12-16 06:04:39.035148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.035164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.035307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.035324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.035401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.035417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.035493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.035514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.035681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.035697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.035769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.035786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.035877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.035894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.036048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.036063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.036155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.036172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.036261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.036277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 
00:36:05.507 [2024-12-16 06:04:39.036369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.036385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.036462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.036479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.036556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.036573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.036654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.036670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.036824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.036839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.036926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.036943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.037024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.037041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.037192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.037209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.507 qpair failed and we were unable to recover it. 00:36:05.507 [2024-12-16 06:04:39.037301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.507 [2024-12-16 06:04:39.037317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.037393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.037410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 
00:36:05.508 [2024-12-16 06:04:39.037509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.037525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.037603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.037619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.037774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.037789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.037869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.037889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.038013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.038029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.038127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.038142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.038233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.038250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.038391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.038406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.038554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.038570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.038639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.038655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 
00:36:05.508 [2024-12-16 06:04:39.038757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.038772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.038912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.038929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.039067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.039083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.039160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.039175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.039264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.039280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.039350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.039371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.039538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.039553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.039647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.039663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.039749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.039765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.039842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.039864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 
00:36:05.508 [2024-12-16 06:04:39.039951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.039967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.040119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.040134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.040322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.040338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.040430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.040445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.040531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.040547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.040633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.040650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.040725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.040740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.040833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.040854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.040930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.040945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 00:36:05.508 [2024-12-16 06:04:39.041040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.508 [2024-12-16 06:04:39.041056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.508 qpair failed and we were unable to recover it. 
00:36:05.510 [2024-12-16 06:04:39.050709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.510 [2024-12-16 06:04:39.050724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.510 qpair failed and we were unable to recover it. 00:36:05.510 [2024-12-16 06:04:39.050818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.510 [2024-12-16 06:04:39.050833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.511 qpair failed and we were unable to recover it. 00:36:05.511 [2024-12-16 06:04:39.051003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.511 [2024-12-16 06:04:39.051031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.511 qpair failed and we were unable to recover it. 00:36:05.511 [2024-12-16 06:04:39.051130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.511 [2024-12-16 06:04:39.051147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.511 qpair failed and we were unable to recover it. 00:36:05.511 [2024-12-16 06:04:39.051353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.511 [2024-12-16 06:04:39.051369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.511 qpair failed and we were unable to recover it. 00:36:05.511 [2024-12-16 06:04:39.051459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.511 [2024-12-16 06:04:39.051475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.511 qpair failed and we were unable to recover it. 00:36:05.511 [2024-12-16 06:04:39.051557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.511 [2024-12-16 06:04:39.051572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.511 qpair failed and we were unable to recover it. 00:36:05.511 [2024-12-16 06:04:39.051646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.511 [2024-12-16 06:04:39.051661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.511 qpair failed and we were unable to recover it. 00:36:05.511 [2024-12-16 06:04:39.051803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.511 [2024-12-16 06:04:39.051819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.511 qpair failed and we were unable to recover it. 00:36:05.511 [2024-12-16 06:04:39.051971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.511 [2024-12-16 06:04:39.051987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.511 qpair failed and we were unable to recover it. 
00:36:05.514 [2024-12-16 06:04:39.071539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.071554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.071693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.071709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.071858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.071874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.072027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.072043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.072217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.072234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.072305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.072320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.072412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.072427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.072649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.072665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.072943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.072959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.073179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.073195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 
00:36:05.514 [2024-12-16 06:04:39.073392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.073408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.073586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.073602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.073813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.073829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.073953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.073972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.074137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.074153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.074248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.074264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.074406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.074421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.074666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.074682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.074910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.074926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.075157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.075172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 
00:36:05.514 [2024-12-16 06:04:39.075317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.075332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.075510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.075525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.075676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.075691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.075899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.075915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.076121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.076137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.076319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.076335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.514 [2024-12-16 06:04:39.076442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.514 [2024-12-16 06:04:39.076458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.514 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.076660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.076675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.076933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.076950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.077111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.077126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 
00:36:05.515 [2024-12-16 06:04:39.077286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.077302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.077512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.077528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.077756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.077771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.077925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.077954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.078049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.078065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.078216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.078232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.078467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.078483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.078633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.078649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.078871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.078888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.078989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.079005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 
00:36:05.515 [2024-12-16 06:04:39.079231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.079247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.079404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.079420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.079660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.079676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.079827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.079843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.080063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.080079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.080187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.080203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.080354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.080370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.080631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.080646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.080790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.080806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.080979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.080996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 
00:36:05.515 [2024-12-16 06:04:39.081223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.081239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.081415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.081431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.081524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.081540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.081690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.081711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.081934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.081951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.082109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.082125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.082218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.082234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.082407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.082423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.082608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.082623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.082779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.082794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 
00:36:05.515 [2024-12-16 06:04:39.083018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.083035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.083197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.083212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.083419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.083435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.083610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.083626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.083845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.515 [2024-12-16 06:04:39.083866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.515 qpair failed and we were unable to recover it. 00:36:05.515 [2024-12-16 06:04:39.083952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.083968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.084179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.084194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.084354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.084370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.084647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.084662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.084889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.084906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 
00:36:05.516 [2024-12-16 06:04:39.085120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.085136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.085292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.085308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.085454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.085470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.085735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.085761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.085988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.086005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.086108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.086124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.086378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.086394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.086504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.086520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.086604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.086620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.086778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.086794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 
00:36:05.516 [2024-12-16 06:04:39.086892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.086913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.087018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.087034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.087135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.087151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.087352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.087368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.087623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.087638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.087894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.087911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.088010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.088025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.088137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.088153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.088252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.088268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.088510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.088526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 
00:36:05.516 [2024-12-16 06:04:39.088760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.088776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.088887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.088904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.088992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.089008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.089186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.089202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.089386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.089403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.089557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.089573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.089734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.089750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.089857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.089874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.089959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.089975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.090135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.090150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 
00:36:05.516 [2024-12-16 06:04:39.090312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.090327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.516 qpair failed and we were unable to recover it. 00:36:05.516 [2024-12-16 06:04:39.090586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.516 [2024-12-16 06:04:39.090602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.090842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.090864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.091018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.091034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.091193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.091209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.091327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.091343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.091518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.091535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.091779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.091799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.091955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.091972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.092116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.092131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 
00:36:05.517 [2024-12-16 06:04:39.092344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.092360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.092459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.092474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.092649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.092665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.092884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.092900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.092994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.093010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.093185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.093200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.093304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.093320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.093585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.093601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.093697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.093713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.093944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.093961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 
00:36:05.517 [2024-12-16 06:04:39.094127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.094143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.094263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.094281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.094441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.094457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.094610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.094626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.094875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.094892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.095004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.095019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.095165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.095181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.095349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.095365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.095477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.095492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 00:36:05.517 [2024-12-16 06:04:39.095695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.517 [2024-12-16 06:04:39.095711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.517 qpair failed and we were unable to recover it. 
00:36:05.517 [2024-12-16 06:04:39.095876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.517 [2024-12-16 06:04:39.095893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420
00:36:05.517 qpair failed and we were unable to recover it.
[... entries differing only in timestamp elided: the same connect() failure (errno = 111) and unrecoverable qpair error repeat for tqpair=0x7ffbb8000b90 through 06:04:39.101669 ...]
00:36:05.518 [2024-12-16 06:04:39.102028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.518 [2024-12-16 06:04:39.102049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.518 qpair failed and we were unable to recover it.
[... entries differing only in timestamp elided: the same connect() failure and unrecoverable qpair error repeat for tqpair=0xa1cd90 through 06:04:39.134771 ...]
00:36:05.523 [2024-12-16 06:04:39.134945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.523 [2024-12-16 06:04:39.134968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420
00:36:05.523 qpair failed and we were unable to recover it.
00:36:05.523 [2024-12-16 06:04:39.135253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.523 [2024-12-16 06:04:39.135269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420
00:36:05.523 qpair failed and we were unable to recover it.
00:36:05.523 [2024-12-16 06:04:39.135489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.523 [2024-12-16 06:04:39.135504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.523 qpair failed and we were unable to recover it. 00:36:05.523 [2024-12-16 06:04:39.135613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.523 [2024-12-16 06:04:39.135630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.523 qpair failed and we were unable to recover it. 00:36:05.523 [2024-12-16 06:04:39.135802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.523 [2024-12-16 06:04:39.135817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.523 qpair failed and we were unable to recover it. 00:36:05.523 [2024-12-16 06:04:39.136051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.523 [2024-12-16 06:04:39.136068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.523 qpair failed and we were unable to recover it. 00:36:05.523 [2024-12-16 06:04:39.136277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.523 [2024-12-16 06:04:39.136292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.523 qpair failed and we were unable to recover it. 00:36:05.523 [2024-12-16 06:04:39.136392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.523 [2024-12-16 06:04:39.136408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.523 qpair failed and we were unable to recover it. 00:36:05.523 [2024-12-16 06:04:39.136504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.523 [2024-12-16 06:04:39.136520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.523 qpair failed and we were unable to recover it. 00:36:05.523 [2024-12-16 06:04:39.136700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.523 [2024-12-16 06:04:39.136715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.523 qpair failed and we were unable to recover it. 00:36:05.523 [2024-12-16 06:04:39.136973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.523 [2024-12-16 06:04:39.136989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.523 qpair failed and we were unable to recover it. 00:36:05.523 [2024-12-16 06:04:39.137150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.523 [2024-12-16 06:04:39.137165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.523 qpair failed and we were unable to recover it. 
00:36:05.523 [2024-12-16 06:04:39.137266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.523 [2024-12-16 06:04:39.137281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.523 qpair failed and we were unable to recover it. 00:36:05.523 [2024-12-16 06:04:39.137384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.523 [2024-12-16 06:04:39.137407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.523 qpair failed and we were unable to recover it. 00:36:05.523 [2024-12-16 06:04:39.137516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.523 [2024-12-16 06:04:39.137531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.523 qpair failed and we were unable to recover it. 00:36:05.523 [2024-12-16 06:04:39.137742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.523 [2024-12-16 06:04:39.137757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.523 qpair failed and we were unable to recover it. 00:36:05.523 [2024-12-16 06:04:39.137911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.523 [2024-12-16 06:04:39.137928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.523 qpair failed and we were unable to recover it. 00:36:05.523 [2024-12-16 06:04:39.138086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.523 [2024-12-16 06:04:39.138102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.138243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.138258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.138487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.138503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.138662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.138677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.138843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.138863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 
00:36:05.524 [2024-12-16 06:04:39.139017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.139033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.139126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.139142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.139300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.139315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.139534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.139550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.139648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.139663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.139867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.139883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.139989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.140005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.140103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.140119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.140264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.140279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.140370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.140385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 
00:36:05.524 [2024-12-16 06:04:39.140588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.140603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.140756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.140772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.141007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.141023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.141121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.141137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.141301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.141317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.141421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.141436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.141527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.141542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.141776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.141792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.141952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.141971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.142074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.142089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 
00:36:05.524 [2024-12-16 06:04:39.142205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.142220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.142385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.142400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.142641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.142657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.142808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.142824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.142942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.142958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.143116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.143131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.143228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.143244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.143417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.143432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.143610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.143625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.143775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.143791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 
00:36:05.524 [2024-12-16 06:04:39.144017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.144033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.144205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.524 [2024-12-16 06:04:39.144221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.524 qpair failed and we were unable to recover it. 00:36:05.524 [2024-12-16 06:04:39.144318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.144333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.144490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.144505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.144749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.144765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.145020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.145037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.145111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.145127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.145223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.145239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.145480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.145495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.145754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.145770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 
00:36:05.525 [2024-12-16 06:04:39.145940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.145957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.146163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.146178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.146351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.146366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.146547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.146564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.146733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.146748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.146906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.146925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.147130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.147145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.147323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.147339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.147453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.147469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.147684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.147700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 
00:36:05.525 [2024-12-16 06:04:39.147866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.147884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.148043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.148058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.148216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.148233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.148338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.148356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.148507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.148523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.148695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.148711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.148797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.148815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.149005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.149022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.149186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.149201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.149381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.149397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 
00:36:05.525 [2024-12-16 06:04:39.149501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.149517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.149619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.149636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.149739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.149755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.149917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.149936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.150083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.150099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.150254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.150271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.150363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.150379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.150485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.150502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.150657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.150674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.150772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.150788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 
00:36:05.525 [2024-12-16 06:04:39.150947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.150964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.525 [2024-12-16 06:04:39.151061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.525 [2024-12-16 06:04:39.151077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.525 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.151170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.151189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.151292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.151309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.151453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.151469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.151636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.151653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.151864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.151882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.152024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.152040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.152149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.152165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.152309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.152325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 
00:36:05.526 [2024-12-16 06:04:39.152433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.152449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.152565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.152581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.152678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.152695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.152852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.152869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.152997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.153013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.153172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.153188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.153289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.153308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.153426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.153442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.153623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.153638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.153824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.153839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 
00:36:05.526 [2024-12-16 06:04:39.154058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.154075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.154165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.154182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.154346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.154363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.154517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.154534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.154743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.154761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.154929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.154947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.155041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.155057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.155258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.155275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.155388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.155405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.155585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.155604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 
00:36:05.526 [2024-12-16 06:04:39.155798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.155813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.155917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.155933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.156165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.156181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.156283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.156300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.526 [2024-12-16 06:04:39.156400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.526 [2024-12-16 06:04:39.156417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.526 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.156581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.156596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.156772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.156788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.156939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.156956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.157113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.157130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.157246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.157262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 
00:36:05.527 [2024-12-16 06:04:39.157358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.157373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.157513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.157529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.157680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.157696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.157789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.157805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.157915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.157931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.158089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.158105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.158271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.158287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.158442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.158458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.158637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.158653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.158814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.158830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 
00:36:05.527 [2024-12-16 06:04:39.158929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.158946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.159179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.159196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.159351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.159368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.159533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.159549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.159645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.159661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.159819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.159836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.160087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.160107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.160260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.160277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.160458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.160475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.160713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.160729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 
00:36:05.527 [2024-12-16 06:04:39.160969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.160986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.161256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.161272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.161413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.161429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.161596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.161612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.161712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.161728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.161884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.161903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.162109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.162125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.162278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.162294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.162384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.162400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.162578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.162595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 
00:36:05.527 [2024-12-16 06:04:39.162711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.162729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.527 qpair failed and we were unable to recover it. 00:36:05.527 [2024-12-16 06:04:39.162914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.527 [2024-12-16 06:04:39.162931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.163075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.163091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.163245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.163261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.163510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.163528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.163688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.163704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.163871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.163888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.164145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.164161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.164366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.164382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.164611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.164626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 
00:36:05.528 [2024-12-16 06:04:39.164764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.164780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.164863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.164879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.165029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.165046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.165195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.165214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.165441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.165457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.165597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.165612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.165864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.165881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.166090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.166106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.166313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.166329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.166554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.166569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 
00:36:05.528 [2024-12-16 06:04:39.166738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.166754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.166902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.166918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.167079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.167094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.167253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.167269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.167477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.167493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.167724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.167740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.167892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.167908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.168064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.168080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.168217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.168233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.168387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.168403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 
00:36:05.528 [2024-12-16 06:04:39.168556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.168572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.168748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.168763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.168945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.168962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.169107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.169123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.169363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.169379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.169675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.169691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.169903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.169919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.528 [2024-12-16 06:04:39.170096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.528 [2024-12-16 06:04:39.170112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.528 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.170309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.170325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.170511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.170526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 
00:36:05.529 [2024-12-16 06:04:39.170704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.170720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.170881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.170897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.171086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.171102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.171351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.171367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.171464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.171480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.171710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.171726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.171889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.171905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.172148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.172163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.172263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.172279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.172533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.172549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 
00:36:05.529 [2024-12-16 06:04:39.172770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.172786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.172995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.173011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.173099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.173115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.173347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.173362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.173621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.173640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.173874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.173891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.174123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.174139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.174298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.174313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.174543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.174559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.174659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.174675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 
00:36:05.529 [2024-12-16 06:04:39.174905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.174920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.175096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.175111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.175358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.175375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.175482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.175497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.175639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.175655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.175815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.175831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.176002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.176018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.176160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.176176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.176268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.176284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.176383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.176398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 
00:36:05.529 [2024-12-16 06:04:39.176554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.176570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.176726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.176743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.176885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.176901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.177147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.177163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.177320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.177335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.177516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.177532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.529 qpair failed and we were unable to recover it. 00:36:05.529 [2024-12-16 06:04:39.177707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.529 [2024-12-16 06:04:39.177723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.177981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.177998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.178174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.178190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.178347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.178363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 
00:36:05.530 [2024-12-16 06:04:39.178580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.178595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.178697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.178713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.178866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.178882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.179065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.179081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.179179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.179194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.179423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.179438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.179600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.179615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.179719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.179735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.179981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.179997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.180149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.180164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 
00:36:05.530 [2024-12-16 06:04:39.180283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.180299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.180554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.180570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.180770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.180786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.181018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.181034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.181137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.181155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.181317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.181333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.181431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.181447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.181631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.181647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.181746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.181761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.181990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.182007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 
00:36:05.530 [2024-12-16 06:04:39.182196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.182211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.182410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.182425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.182597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.182613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.182765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.182780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.182964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.182980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.183097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.183114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.183278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.183293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.183384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.183401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.183555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.183571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.183741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.183757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 
00:36:05.530 [2024-12-16 06:04:39.183924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.183940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.184139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.184155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.530 [2024-12-16 06:04:39.184259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.530 [2024-12-16 06:04:39.184274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.530 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.184487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.184503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.184673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.184689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.184852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.184868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.185027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.185043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.185148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.185163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.185266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.185283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.185363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.185379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 
00:36:05.531 [2024-12-16 06:04:39.185636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.185652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.185779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.185799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.185902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.185919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.186075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.186090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.186182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.186197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.186291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.186306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.186484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.186499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.186585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.186601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.186768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.186784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.186885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.186901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 
00:36:05.531 [2024-12-16 06:04:39.187065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.187081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.187295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.187311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.187417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.187432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.187622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.187638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.187718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.187736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.187892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.187909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.188025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.188040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.188154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.188169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.188332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.188348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.188453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.188468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 
00:36:05.531 [2024-12-16 06:04:39.188631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.188647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.188806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.188822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.188989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.189005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.189112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.189127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.189219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.189234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.189325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.189340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.531 qpair failed and we were unable to recover it. 00:36:05.531 [2024-12-16 06:04:39.189603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.531 [2024-12-16 06:04:39.189618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.189776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.189791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.190011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.190028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.190136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.190151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 
00:36:05.532 [2024-12-16 06:04:39.190313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.190329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.190503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.190518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.190672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.190688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.190838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.190858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.191017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.191033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.191134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.191149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.191243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.191260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.191548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.191564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.191715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.191731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.191908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.191927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 
00:36:05.532 [2024-12-16 06:04:39.192022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.192038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.192231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.192247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.192355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.192372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.192576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.192592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.192825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.192840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.193056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.193072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.193168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.193183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.193274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.193289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.193397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.193412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.193645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.193661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 
00:36:05.532 [2024-12-16 06:04:39.193816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.193832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.194022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.194045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.194205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.194217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.194436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.194448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.194673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.194688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.194861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.194873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.194979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.194991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.195082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.195092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.195183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.195195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.195277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.195288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 
00:36:05.532 [2024-12-16 06:04:39.195393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.195405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.195584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.195596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.195742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.195753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.195857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.195868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.196052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.196063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.532 qpair failed and we were unable to recover it. 00:36:05.532 [2024-12-16 06:04:39.196210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.532 [2024-12-16 06:04:39.196222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.196363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.196374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.196559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.196572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.196675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.196686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.196901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.196914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 
00:36:05.533 [2024-12-16 06:04:39.197009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.197021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.197132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.197143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.197278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.197290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.197358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.197369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.197537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.197549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.197776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.197787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.197993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.198005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.198184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.198195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.198346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.198358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.198447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.198459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 
00:36:05.533 [2024-12-16 06:04:39.198613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.198625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.198729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.198740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.198952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.198965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.199048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.199060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.199141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.199152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.199231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.199242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.199413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.199425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.199599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.199610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.199688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.199699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.199918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.199931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 
00:36:05.533 [2024-12-16 06:04:39.200012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.200023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.200160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.200172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.200275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.200286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.200449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.200461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.200632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.200646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.200898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.200910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.201006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.201017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.201160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.201171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.201265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.201276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.201370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.201382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 
00:36:05.533 [2024-12-16 06:04:39.201611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.201622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.533 [2024-12-16 06:04:39.201786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.533 [2024-12-16 06:04:39.201797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.533 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.201873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.201884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.202059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.202070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.202161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.202172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.202374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.202385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.202475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.202486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.202622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.202633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.202791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.202802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.202948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.202960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 
00:36:05.534 [2024-12-16 06:04:39.203067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.203078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.203267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.203279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.203453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.203464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.203654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.203665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.203866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.203877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.204024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.204036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.204133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.204144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.204239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.204251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.204456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.204468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.204564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.204575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 
00:36:05.534 [2024-12-16 06:04:39.204755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.204767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.204944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.204955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.205048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.205058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.205144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.205155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.205311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.205322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.205504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.205515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.205657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.205668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.205804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.205816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.205963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.205975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.206065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.206076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 
00:36:05.534 [2024-12-16 06:04:39.206182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.206193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.206362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.206373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.206527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.206539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.206685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.206696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.206897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.206911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.207064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.207076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.207165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.207176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.207260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.207271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.207380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.207392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 00:36:05.534 [2024-12-16 06:04:39.207559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.207570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.534 qpair failed and we were unable to recover it. 
00:36:05.534 [2024-12-16 06:04:39.207770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.534 [2024-12-16 06:04:39.207780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.207986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.207999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.208100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.208111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.208315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.208327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.208414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.208425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.208493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.208504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.208705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.208717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.208869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.208882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.208999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.209010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.209111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.209123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 
00:36:05.535 [2024-12-16 06:04:39.209297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.209308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.209465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.209477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.209562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.209573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.209744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.209755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.209971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.209983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.210118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.210129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.210271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.210282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.210387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.210397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.210610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.210622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.210789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.210801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 
00:36:05.535 [2024-12-16 06:04:39.210952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.210964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.211058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.211069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.211141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.211152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.211356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.211367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.211537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.211549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.211759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.211771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.212009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.212022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.212183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.212195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.212279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.212289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.212553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.212565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 
00:36:05.535 [2024-12-16 06:04:39.212813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.212825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.212996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.213008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.213099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.213110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.535 [2024-12-16 06:04:39.213275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.535 [2024-12-16 06:04:39.213287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.535 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.213488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.213503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.213600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.213611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.213825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.213837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.213998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.214009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.214098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.214109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.214208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.214219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 
00:36:05.536 [2024-12-16 06:04:39.214323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.214334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.214586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.214597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.214746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.214757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.214906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.214918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.215012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.215023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.215182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.215193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.215287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.215298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.215551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.215562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.215728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.215739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.215897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.215910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 
00:36:05.536 [2024-12-16 06:04:39.216114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.216126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.216276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.216287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.216370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.216381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.216617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.216628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.216830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.216841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.216930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.216942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.217010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.217020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.217178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.217189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.217284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.217295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 00:36:05.536 [2024-12-16 06:04:39.217451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.536 [2024-12-16 06:04:39.217463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.536 qpair failed and we were unable to recover it. 
00:36:05.537 [2024-12-16 06:04:39.224187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.537 [2024-12-16 06:04:39.224199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420
00:36:05.537 qpair failed and we were unable to recover it.
00:36:05.537 [2024-12-16 06:04:39.224365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.537 [2024-12-16 06:04:39.224390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420
00:36:05.537 qpair failed and we were unable to recover it.
00:36:05.537 [2024-12-16 06:04:39.224631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.537 [2024-12-16 06:04:39.224648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420
00:36:05.537 qpair failed and we were unable to recover it.
00:36:05.537 [2024-12-16 06:04:39.224881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.537 [2024-12-16 06:04:39.224898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420
00:36:05.537 qpair failed and we were unable to recover it.
00:36:05.537 [2024-12-16 06:04:39.225051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.537 [2024-12-16 06:04:39.225066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420
00:36:05.537 qpair failed and we were unable to recover it.
00:36:05.537 [2024-12-16 06:04:39.225257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.537 [2024-12-16 06:04:39.225272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420
00:36:05.537 qpair failed and we were unable to recover it.
00:36:05.537 [2024-12-16 06:04:39.225423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.537 [2024-12-16 06:04:39.225439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420
00:36:05.537 qpair failed and we were unable to recover it.
00:36:05.537 [2024-12-16 06:04:39.225597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.537 [2024-12-16 06:04:39.225613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420
00:36:05.537 qpair failed and we were unable to recover it.
00:36:05.538 [2024-12-16 06:04:39.225861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.538 [2024-12-16 06:04:39.225877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420
00:36:05.538 qpair failed and we were unable to recover it.
00:36:05.538 [2024-12-16 06:04:39.226048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.538 [2024-12-16 06:04:39.226063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420
00:36:05.538 qpair failed and we were unable to recover it.
00:36:05.542 [2024-12-16 06:04:39.255179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.255191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.255338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.255356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.255545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.255560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.255794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.255810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.256008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.256024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.256323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.256339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.256515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.256530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.256684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.256699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.256820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.256836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.257087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.257102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 
00:36:05.542 [2024-12-16 06:04:39.257212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.257227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.257391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.257407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.257525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.257540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.257790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.257805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.257958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.257978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.258085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.258101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.258180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.258196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.258303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.258318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.258525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.258541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.258637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.258652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 
00:36:05.542 [2024-12-16 06:04:39.258813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.258828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.258995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.259010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.259176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.259191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.259288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.259304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.259411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.259426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.259596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.259611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.259817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.259833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.259954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.259970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.260119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.260134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.260273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.260289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 
00:36:05.542 [2024-12-16 06:04:39.260477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.260492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.260671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.260687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.260938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.260954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.261138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.261153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.261332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.261347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.261571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.261587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.261818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.261834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.262051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.262067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.262226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.262241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.542 [2024-12-16 06:04:39.262424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.262439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 
00:36:05.542 [2024-12-16 06:04:39.262616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.542 [2024-12-16 06:04:39.262632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.542 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.262808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.262823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.263081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.263093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.263252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.263263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.263513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.263525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.263785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.263796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.264049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.264062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.264167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.264179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.264359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.264370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.264516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.264527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 
00:36:05.543 [2024-12-16 06:04:39.264778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.264790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.264965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.264977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.265144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.265155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.265253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.265264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.265406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.265417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.265524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.265535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.265681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.265694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.265872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.265883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.265985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.265997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.266101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.266112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 
00:36:05.543 [2024-12-16 06:04:39.266250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.266262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.266407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.266418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.266715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.266727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.266970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.266982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.267137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.267148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.267296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.267308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.267565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.267576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.267730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.267740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.267896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.267908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.268059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.268070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 
00:36:05.543 [2024-12-16 06:04:39.268218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.268229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.268388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.268399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.268575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.268587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.268830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.268841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.269004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.269015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.269177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.269189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.269392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.269403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.269559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.269570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.269798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.269811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.269997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.270009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 
00:36:05.543 [2024-12-16 06:04:39.270089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.270099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.270186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.543 [2024-12-16 06:04:39.270199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.543 qpair failed and we were unable to recover it. 00:36:05.543 [2024-12-16 06:04:39.270282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.270293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.270374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.270386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.270538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.270549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.270711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.270723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.270879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.270891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.271025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.271035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.271192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.271204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.271298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.271309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 
00:36:05.544 [2024-12-16 06:04:39.271449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.271461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.271679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.271690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.271923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.271935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.272027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.272038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.272190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.272202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.272297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.272308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.272481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.272493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.272596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.272607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.272745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.272756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.272903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.272915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 
00:36:05.544 [2024-12-16 06:04:39.273062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.273074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.273173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.273184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.273344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.273355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.273533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.273545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.273800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.273811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.273961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.273972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.274130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.274141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.274219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.274230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.274316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.274327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.274479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.274490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 
00:36:05.544 [2024-12-16 06:04:39.274659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.274670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.274818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.274830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.274918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.274928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.275071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.275083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.275180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.275191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.275301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.275313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.275508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.275520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.275607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.275618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.544 [2024-12-16 06:04:39.275789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.544 [2024-12-16 06:04:39.275801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.544 qpair failed and we were unable to recover it. 00:36:05.545 [2024-12-16 06:04:39.275945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.545 [2024-12-16 06:04:39.275956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.545 qpair failed and we were unable to recover it. 
00:36:05.545 [2024-12-16 06:04:39.276068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.545 [2024-12-16 06:04:39.276080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.545 qpair failed and we were unable to recover it. 00:36:05.545 [2024-12-16 06:04:39.276169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.545 [2024-12-16 06:04:39.276183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.545 qpair failed and we were unable to recover it. 00:36:05.545 [2024-12-16 06:04:39.276276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.545 [2024-12-16 06:04:39.276287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.545 qpair failed and we were unable to recover it. 00:36:05.545 [2024-12-16 06:04:39.276432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.545 [2024-12-16 06:04:39.276443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.545 qpair failed and we were unable to recover it. 00:36:05.545 [2024-12-16 06:04:39.276655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.545 [2024-12-16 06:04:39.276666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.545 qpair failed and we were unable to recover it. 00:36:05.545 [2024-12-16 06:04:39.276901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.545 [2024-12-16 06:04:39.276913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.545 qpair failed and we were unable to recover it. 00:36:05.545 [2024-12-16 06:04:39.277050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.545 [2024-12-16 06:04:39.277061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.545 qpair failed and we were unable to recover it. 00:36:05.545 [2024-12-16 06:04:39.277142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.545 [2024-12-16 06:04:39.277152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.545 qpair failed and we were unable to recover it. 00:36:05.545 [2024-12-16 06:04:39.277287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.545 [2024-12-16 06:04:39.277298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.545 qpair failed and we were unable to recover it. 00:36:05.545 [2024-12-16 06:04:39.277435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.545 [2024-12-16 06:04:39.277446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.545 qpair failed and we were unable to recover it. 
00:36:05.545 [2024-12-16 06:04:39.277596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.545 [2024-12-16 06:04:39.277608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.545 qpair failed and we were unable to recover it. 00:36:05.545 [2024-12-16 06:04:39.277756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.545 [2024-12-16 06:04:39.277767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.545 qpair failed and we were unable to recover it. 00:36:05.545 [2024-12-16 06:04:39.277977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.545 [2024-12-16 06:04:39.277990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.545 qpair failed and we were unable to recover it. 00:36:05.545 [2024-12-16 06:04:39.278145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.545 [2024-12-16 06:04:39.278156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.545 qpair failed and we were unable to recover it. 00:36:05.545 [2024-12-16 06:04:39.278251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.545 [2024-12-16 06:04:39.278261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.545 qpair failed and we were unable to recover it. 00:36:05.545 [2024-12-16 06:04:39.278551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.545 [2024-12-16 06:04:39.278563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.545 qpair failed and we were unable to recover it. 00:36:05.545 [2024-12-16 06:04:39.278733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.545 [2024-12-16 06:04:39.278744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.545 qpair failed and we were unable to recover it. 00:36:05.545 [2024-12-16 06:04:39.278905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.545 [2024-12-16 06:04:39.278916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.545 qpair failed and we were unable to recover it. 00:36:05.545 [2024-12-16 06:04:39.279024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.545 [2024-12-16 06:04:39.279034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.545 qpair failed and we were unable to recover it. 00:36:05.545 [2024-12-16 06:04:39.279247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.545 [2024-12-16 06:04:39.279259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.545 qpair failed and we were unable to recover it. 
00:36:05.550 [2024-12-16 06:04:39.314272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.550 [2024-12-16 06:04:39.314283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.550 qpair failed and we were unable to recover it. 00:36:05.550 [2024-12-16 06:04:39.314434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.550 [2024-12-16 06:04:39.314445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.550 qpair failed and we were unable to recover it. 00:36:05.550 [2024-12-16 06:04:39.314696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.550 [2024-12-16 06:04:39.314707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.550 qpair failed and we were unable to recover it. 00:36:05.550 [2024-12-16 06:04:39.314857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.550 [2024-12-16 06:04:39.314869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.550 qpair failed and we were unable to recover it. 00:36:05.550 [2024-12-16 06:04:39.315012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.550 [2024-12-16 06:04:39.315023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.550 qpair failed and we were unable to recover it. 00:36:05.550 [2024-12-16 06:04:39.315227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.550 [2024-12-16 06:04:39.315238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.550 qpair failed and we were unable to recover it. 00:36:05.550 [2024-12-16 06:04:39.315420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.550 [2024-12-16 06:04:39.315432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.550 qpair failed and we were unable to recover it. 00:36:05.550 [2024-12-16 06:04:39.315683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.550 [2024-12-16 06:04:39.315695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.550 qpair failed and we were unable to recover it. 00:36:05.550 [2024-12-16 06:04:39.315936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.550 [2024-12-16 06:04:39.315948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.550 qpair failed and we were unable to recover it. 00:36:05.550 [2024-12-16 06:04:39.316097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.550 [2024-12-16 06:04:39.316108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.550 qpair failed and we were unable to recover it. 
00:36:05.550 [2024-12-16 06:04:39.316203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.550 [2024-12-16 06:04:39.316214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.550 qpair failed and we were unable to recover it. 00:36:05.550 [2024-12-16 06:04:39.316367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.550 [2024-12-16 06:04:39.316378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.550 qpair failed and we were unable to recover it. 00:36:05.550 [2024-12-16 06:04:39.316538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.550 [2024-12-16 06:04:39.316550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.550 qpair failed and we were unable to recover it. 00:36:05.550 [2024-12-16 06:04:39.316703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.550 [2024-12-16 06:04:39.316714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.550 qpair failed and we were unable to recover it. 00:36:05.550 [2024-12-16 06:04:39.316884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.550 [2024-12-16 06:04:39.316896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.550 qpair failed and we were unable to recover it. 00:36:05.550 [2024-12-16 06:04:39.316975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.550 [2024-12-16 06:04:39.316986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.550 qpair failed and we were unable to recover it. 00:36:05.550 [2024-12-16 06:04:39.317209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.550 [2024-12-16 06:04:39.317220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.550 qpair failed and we were unable to recover it. 00:36:05.550 [2024-12-16 06:04:39.317387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.550 [2024-12-16 06:04:39.317399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.550 qpair failed and we were unable to recover it. 00:36:05.550 [2024-12-16 06:04:39.317508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.550 [2024-12-16 06:04:39.317520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.550 qpair failed and we were unable to recover it. 00:36:05.550 [2024-12-16 06:04:39.317693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.550 [2024-12-16 06:04:39.317704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.550 qpair failed and we were unable to recover it. 
00:36:05.550 [2024-12-16 06:04:39.317856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.550 [2024-12-16 06:04:39.317868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.550 qpair failed and we were unable to recover it. 00:36:05.550 [2024-12-16 06:04:39.317955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.317969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.318223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.318235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.318437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.318447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.318618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.318629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.318798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.318809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.318964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.318975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.319127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.319138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.319218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.319229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.319395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.319406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 
00:36:05.551 [2024-12-16 06:04:39.319666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.319678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.319841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.319857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.319952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.319963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.320096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.320108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.320361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.320372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.320551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.320563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.320766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.320776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.320928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.320940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.321097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.321109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.321264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.321276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 
00:36:05.551 [2024-12-16 06:04:39.321466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.321478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.321629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.321640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.321800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.321811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.321894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.321905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.322082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.322093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.322175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.322186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.322318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.322329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.322548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.322559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.322766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.322778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.322969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.322981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 
00:36:05.551 [2024-12-16 06:04:39.323124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.323136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.323206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.323216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.323363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.323375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.323593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.323604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.323755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.323766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.323845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.323859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.324006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.324018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.324187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.324198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.324408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.324420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.324643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.324654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 
00:36:05.551 [2024-12-16 06:04:39.324815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.324826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.324995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.325012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.325166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.325176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.325330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.325341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.325442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.325454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.325669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.551 [2024-12-16 06:04:39.325680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.551 qpair failed and we were unable to recover it. 00:36:05.551 [2024-12-16 06:04:39.325905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.552 [2024-12-16 06:04:39.325917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.552 qpair failed and we were unable to recover it. 00:36:05.552 [2024-12-16 06:04:39.326023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.552 [2024-12-16 06:04:39.326035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.552 qpair failed and we were unable to recover it. 00:36:05.552 [2024-12-16 06:04:39.326191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.326203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.326360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.326371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 
00:36:05.835 [2024-12-16 06:04:39.326533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.326545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.326709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.326719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.326970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.326981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.327125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.327136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.327219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.327230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.327380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.327391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.327464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.327475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.327621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.327633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.327731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.327742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.327878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.327889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 
00:36:05.835 [2024-12-16 06:04:39.328044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.328056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.328216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.328227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.328366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.328377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.328467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.328478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.328580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.328592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.328674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.328685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.328894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.328906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.328994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.329005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.329153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.329165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.329322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.329333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 
00:36:05.835 [2024-12-16 06:04:39.329420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.329430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.329524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.329535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.329685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.329697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.329871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.329882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.329986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.329998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.330075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.330086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.330283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.330295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.330452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.330464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.330593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.330604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.330758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.330770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 
00:36:05.835 [2024-12-16 06:04:39.330976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.330988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.331160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.331173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.331339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.331351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.331524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.835 [2024-12-16 06:04:39.331535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.835 qpair failed and we were unable to recover it. 00:36:05.835 [2024-12-16 06:04:39.331752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.331764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.331942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.331953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.332033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.332044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.332204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.332215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.332321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.332332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.332563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.332576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 
00:36:05.836 [2024-12-16 06:04:39.332720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.332731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.332814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.332825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.333013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.333025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.333225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.333235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.333412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.333424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.333636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.333648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.333802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.333813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.334006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.334017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.334114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.334125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.334213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.334224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 
00:36:05.836 [2024-12-16 06:04:39.334321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.334333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.334479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.334490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.334719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.334731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.334930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.334942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.335243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.335254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.335407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.335418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.335644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.335656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.335803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.335814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.336097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.336109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.336243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.336255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 
00:36:05.836 [2024-12-16 06:04:39.336403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.336414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.336553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.336564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.336708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.336720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.336788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.336799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.336945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.336958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.337179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.337191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.337284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.337295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.337516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.337528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.337669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.337680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.337911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.337922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 
00:36:05.836 [2024-12-16 06:04:39.338001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.338012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.338116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.338129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.338214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.836 [2024-12-16 06:04:39.338225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.836 qpair failed and we were unable to recover it. 00:36:05.836 [2024-12-16 06:04:39.338478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.338489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.338638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.338649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.338791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.338803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.338900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.338911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.339019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.339031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.339120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.339131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.339284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.339296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 
00:36:05.837 [2024-12-16 06:04:39.339505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.339516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.339655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.339665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.339897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.339909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.340043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.340055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.340284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.340295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.340382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.340393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.340550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.340561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.340807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.340818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.340935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.340947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.341113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.341123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 
00:36:05.837 [2024-12-16 06:04:39.341222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.341233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.341319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.341330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.341474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.341486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.341632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.341643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.341790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.341801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.341972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.341985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.342157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.342169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.342307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.342318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.342520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.342532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.342684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.342695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 
00:36:05.837 [2024-12-16 06:04:39.342899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.342911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.343136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.343147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.343288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.343299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.343394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.343404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.343493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.343504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.343670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.343681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.343850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.343863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.344004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.344015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.344170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.344181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.344328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.344339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 
00:36:05.837 [2024-12-16 06:04:39.344510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.344522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.837 [2024-12-16 06:04:39.344682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.837 [2024-12-16 06:04:39.344696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.837 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.344938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.344950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.345152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.345163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.345388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.345399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.345485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.345496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.345715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.345727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.345888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.345900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.346021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.346031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.346233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.346245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 
00:36:05.838 [2024-12-16 06:04:39.346423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.346434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.346599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.346610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.346696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.346707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.346780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.346791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.346888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.346900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.347053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.347065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.347326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.347338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.347472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.347483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.347638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.347650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.347744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.347755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 
00:36:05.838 [2024-12-16 06:04:39.347956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.347970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.348223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.348234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.348452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.348464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.348675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.348686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.348831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.348842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.349093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.349105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.349202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.349213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.349303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.349314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.349532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.349544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.349644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.349656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 
00:36:05.838 [2024-12-16 06:04:39.349895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.349906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.350066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.350078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.350171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.350182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.350291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.350302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.350403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.350414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.350664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.350676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.350823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.350834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.350988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.351000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.351099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.351110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.838 [2024-12-16 06:04:39.351311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.351322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 
00:36:05.838 [2024-12-16 06:04:39.351474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.838 [2024-12-16 06:04:39.351486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.838 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.351631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.351642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.351715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.351726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.351872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.351884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.352041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.352052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.352215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.352227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.352317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.352329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.352560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.352571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.352745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.352756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.352969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.352981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 
00:36:05.839 [2024-12-16 06:04:39.353187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.353199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.353296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.353308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.353457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.353468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.353679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.353691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.353850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.353863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.354025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.354037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.354192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.354204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.354365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.354377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.354581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.354593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.354730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.354742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 
00:36:05.839 [2024-12-16 06:04:39.354901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.354914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.355051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.355063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.355215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.355225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.355328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.355339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.355416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.355427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.355590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.355602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.355804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.355815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.355970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.355982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.356141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.356154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.356298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.356309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 
00:36:05.839 [2024-12-16 06:04:39.356502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.356513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.356748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.356759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.356944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.356956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.357092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.839 [2024-12-16 06:04:39.357104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.839 qpair failed and we were unable to recover it. 00:36:05.839 [2024-12-16 06:04:39.357251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.357263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.357417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.357428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.357515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.357526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.357672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.357684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.357845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.357860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.358014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.358025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 
00:36:05.840 [2024-12-16 06:04:39.358162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.358173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.358360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.358371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.358471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.358482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.358709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.358720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.358882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.358894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.358965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.358976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.359157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.359168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.359246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.359257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.359408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.359420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.359691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.359703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 
00:36:05.840 [2024-12-16 06:04:39.359950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.359962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.360117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.360128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.360197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.360208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.360410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.360421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.360579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.360590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.360770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.360782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.360947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.360959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.361104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.361114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.361263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.361274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.361420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.361431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 
00:36:05.840 [2024-12-16 06:04:39.361588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.361600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.361849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.361860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.362084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.362096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.362259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.362270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.362422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.362434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.362660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.362672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.362888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.362900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.362987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.362997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.363136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.363150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.363231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.363241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 
00:36:05.840 [2024-12-16 06:04:39.363390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.363402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.363627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.363638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.363837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.840 [2024-12-16 06:04:39.363852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.840 qpair failed and we were unable to recover it. 00:36:05.840 [2024-12-16 06:04:39.364079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.841 [2024-12-16 06:04:39.364091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.841 qpair failed and we were unable to recover it. 00:36:05.841 [2024-12-16 06:04:39.364306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.841 [2024-12-16 06:04:39.364317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.841 qpair failed and we were unable to recover it. 00:36:05.841 [2024-12-16 06:04:39.364462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.841 [2024-12-16 06:04:39.364473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.841 qpair failed and we were unable to recover it. 00:36:05.841 [2024-12-16 06:04:39.364652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.841 [2024-12-16 06:04:39.364663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.841 qpair failed and we were unable to recover it. 00:36:05.841 [2024-12-16 06:04:39.364810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.841 [2024-12-16 06:04:39.364821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.841 qpair failed and we were unable to recover it. 00:36:05.841 [2024-12-16 06:04:39.365000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.841 [2024-12-16 06:04:39.365013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.841 qpair failed and we were unable to recover it. 00:36:05.841 [2024-12-16 06:04:39.365162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.841 [2024-12-16 06:04:39.365173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.841 qpair failed and we were unable to recover it. 
00:36:05.841 [2024-12-16 06:04:39.365350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.841 [2024-12-16 06:04:39.365361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420
00:36:05.841 qpair failed and we were unable to recover it.
[The same three-line error record repeats continuously in this block through 2024-12-16 06:04:39.406429: connect() failed with errno = 111, followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x7ffbb0000b90, addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." Only the timestamps differ between repetitions.]
00:36:05.846 [2024-12-16 06:04:39.406654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.846 [2024-12-16 06:04:39.406665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.846 qpair failed and we were unable to recover it. 00:36:05.846 [2024-12-16 06:04:39.406811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.846 [2024-12-16 06:04:39.406822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.846 qpair failed and we were unable to recover it. 00:36:05.846 [2024-12-16 06:04:39.406988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.846 [2024-12-16 06:04:39.406999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.846 qpair failed and we were unable to recover it. 00:36:05.846 [2024-12-16 06:04:39.407171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.846 [2024-12-16 06:04:39.407182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.846 qpair failed and we were unable to recover it. 00:36:05.846 [2024-12-16 06:04:39.407317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.846 [2024-12-16 06:04:39.407328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.846 qpair failed and we were unable to recover it. 00:36:05.846 [2024-12-16 06:04:39.407437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.846 [2024-12-16 06:04:39.407458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.846 qpair failed and we were unable to recover it. 00:36:05.846 [2024-12-16 06:04:39.407690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.846 [2024-12-16 06:04:39.407706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.846 qpair failed and we were unable to recover it. 00:36:05.846 [2024-12-16 06:04:39.407863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.846 [2024-12-16 06:04:39.407880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.846 qpair failed and we were unable to recover it. 00:36:05.846 [2024-12-16 06:04:39.408131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.846 [2024-12-16 06:04:39.408147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.846 qpair failed and we were unable to recover it. 00:36:05.846 [2024-12-16 06:04:39.408392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.846 [2024-12-16 06:04:39.408407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.846 qpair failed and we were unable to recover it. 
00:36:05.846 [2024-12-16 06:04:39.408662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.846 [2024-12-16 06:04:39.408678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.846 qpair failed and we were unable to recover it. 00:36:05.846 [2024-12-16 06:04:39.408906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.846 [2024-12-16 06:04:39.408922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.846 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.409156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.409171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.409330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.409345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.409587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.409602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.409765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.409780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.410009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.410025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.410179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.410195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.410442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.410461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.410562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.410577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 
00:36:05.847 [2024-12-16 06:04:39.410824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.410840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.411076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.411091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.411249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.411264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.411446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.411461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.411721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.411736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.411985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.412001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.412111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.412126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.412344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.412360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.412608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.412623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.412868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.412883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 
00:36:05.847 [2024-12-16 06:04:39.413121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.413137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.413341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.413356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.413584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.413599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.413780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.413795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.414027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.414043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.414208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.414223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.414475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.414491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.414655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.414671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.414826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.414841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.414935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.414950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 
00:36:05.847 [2024-12-16 06:04:39.415183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.415199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.415362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.415377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.415590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.415605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.415835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.415856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.416118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.416133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.416341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.416357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.416458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.416470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.416704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.416715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.416917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.416929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.847 [2024-12-16 06:04:39.417095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.417107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 
00:36:05.847 [2024-12-16 06:04:39.417328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.847 [2024-12-16 06:04:39.417340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.847 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.417529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.417541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.417689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.417700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.417931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.417943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.418110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.418121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.418267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.418278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.418499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.418510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.418657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.418669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.418862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.418876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.419125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.419137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 
00:36:05.848 [2024-12-16 06:04:39.419304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.419315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.419485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.419496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.419645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.419656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.419828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.419839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.420100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.420112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.420317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.420328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.420486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.420497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.420725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.420736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.420965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.420977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.421163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.421174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 
00:36:05.848 [2024-12-16 06:04:39.421338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.421349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.421550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.421561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.421811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.421823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.422010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.422022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.422247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.422258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.422459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.422470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.422667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.422679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.422909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.422921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.423146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.423157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.423326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.423338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 
00:36:05.848 [2024-12-16 06:04:39.423554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.423565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.423735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.423746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.423903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.423915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.424067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.424079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.424223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.848 [2024-12-16 06:04:39.424235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.848 qpair failed and we were unable to recover it. 00:36:05.848 [2024-12-16 06:04:39.424444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.424462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.424681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.424696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.424948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.424965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.425203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.425219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.425374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.425389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 
00:36:05.849 [2024-12-16 06:04:39.425620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.425636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.425797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.425813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.425995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.426010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.426244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.426259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.426415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.426431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.426583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.426598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.426811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.426826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.426986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.427002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.427231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.427252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.427508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.427523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 
00:36:05.849 [2024-12-16 06:04:39.427678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.427693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.427981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.427997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.428160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.428176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.428332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.428347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.428600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.428615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.428811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.428826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.429086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.429102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.429269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.429284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.429541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.429556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.429786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.429801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 
00:36:05.849 [2024-12-16 06:04:39.430038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.430054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.430263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.430278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.430497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.430512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.430670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.430686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.430845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.430865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.431035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.431051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.431296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.431312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.431494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.431510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.431652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.431667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.431849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.431865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 
00:36:05.849 [2024-12-16 06:04:39.432018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.432034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.432259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.432275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.432440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.432455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.849 qpair failed and we were unable to recover it. 00:36:05.849 [2024-12-16 06:04:39.432684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.849 [2024-12-16 06:04:39.432700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.432882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.432898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.433003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.433017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.433192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.433204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.433450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.433461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.433697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.433708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.433938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.433950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 
00:36:05.850 [2024-12-16 06:04:39.434211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.434223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.434449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.434460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.434630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.434642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.434795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.434806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.435025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.435037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.435188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.435200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.435400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.435411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.435648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.435660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.435830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.435844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.436006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.436018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 
00:36:05.850 [2024-12-16 06:04:39.436237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.436249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.436404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.436415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.436555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.436566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.436708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.436719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.436924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.436936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.437078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.437089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.437159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.437170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.437341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.437352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.437504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.437515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.437680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.437691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 
00:36:05.850 [2024-12-16 06:04:39.437781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.437792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.438037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.438049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.438209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.438220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.438397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.438409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.438614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.438625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.438773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.438783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.439001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.439012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.439181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.439192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.439338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.439349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.439442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.439454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 
00:36:05.850 [2024-12-16 06:04:39.439609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.850 [2024-12-16 06:04:39.439620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.850 qpair failed and we were unable to recover it. 00:36:05.850 [2024-12-16 06:04:39.439786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.439798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.439949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.439961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.440179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.440190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.440327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.440338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.440616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.440634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.440745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.440760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.440937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.440953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.441115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.441131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.441310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.441325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 
00:36:05.851 [2024-12-16 06:04:39.441476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.441491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.441724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.441740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.441912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.441928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.442089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.442104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.442345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.442360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.442541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.442556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.442796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.442812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.442972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.442988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.443145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.443163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.443330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.443345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 
00:36:05.851 [2024-12-16 06:04:39.443582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.443597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.443826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.443841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.444116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.444132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.444238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.444253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.444502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.444517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.444756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.444772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.444980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.444996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.445210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.445225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.445385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.445401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.445625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.445641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 
00:36:05.851 [2024-12-16 06:04:39.445793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.445808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.445959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.445976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.446131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.446147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.446286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.446301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.446479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.446495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.446648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.446663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.446914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.446930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.447035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.447050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.447293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.447308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 00:36:05.851 [2024-12-16 06:04:39.447542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.851 [2024-12-16 06:04:39.447558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.851 qpair failed and we were unable to recover it. 
00:36:05.851 [2024-12-16 06:04:39.447765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.447779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.447942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.447958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.448173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.448188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.448423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.448438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.448669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.448684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.448944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.448959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.449186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.449198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.449352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.449363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.449585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.449596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.449748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.449760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 
00:36:05.852 [2024-12-16 06:04:39.449924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.449936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.450161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.450173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.450398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.450410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.450573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.450585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.450817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.450829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.451077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.451090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.451238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.451249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.451484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.451495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.451647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.451659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.451888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.451900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 
00:36:05.852 [2024-12-16 06:04:39.451981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.451992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.452194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.452205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.452426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.452437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.452678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.452690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.452930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.452942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.453191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.453203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.453340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.453351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.453523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.453534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.453738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.453750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.453912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.453924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 
00:36:05.852 [2024-12-16 06:04:39.454069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.454080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.454232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.454243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.454390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.454401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.454638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.852 [2024-12-16 06:04:39.454650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.852 qpair failed and we were unable to recover it. 00:36:05.852 [2024-12-16 06:04:39.454738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.454749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.454911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.454923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.455094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.455105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.455184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.455195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.455418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.455429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.455634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.455645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 
00:36:05.853 [2024-12-16 06:04:39.455811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.455823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.455996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.456008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.456153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.456164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.456386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.456397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.456572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.456583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.456690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.456704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.456854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.456865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.457033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.457045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.457135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.457147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.457298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.457309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 
00:36:05.853 [2024-12-16 06:04:39.457462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.457473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.457605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.457617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.457861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.457872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.458095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.458106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.458256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.458267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.458452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.458463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.458708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.458719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.458959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.458971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.459144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.459155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.459359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.459370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 
00:36:05.853 [2024-12-16 06:04:39.459518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.459529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.459732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.459743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.459985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.459996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.460271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.460282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.460364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.460376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.460626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.460637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.460792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.460804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.461028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.461039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.461252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.461264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.461413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.461424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 
00:36:05.853 [2024-12-16 06:04:39.461509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.461520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.461653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.461665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.461878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.853 [2024-12-16 06:04:39.461890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.853 qpair failed and we were unable to recover it. 00:36:05.853 [2024-12-16 06:04:39.461979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.461990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.462084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.462096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.462297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.462308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.462475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.462486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.462711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.462723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.462825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.462836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.462987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.462998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 
00:36:05.854 [2024-12-16 06:04:39.463150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.463161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.463340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.463352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.463522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.463533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.463736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.463747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.463991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.464003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.464229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.464242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.464493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.464505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.464730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.464741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.464880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.464892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.465094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.465105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 
00:36:05.854 [2024-12-16 06:04:39.465200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.465211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.465365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.465376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.465533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.465544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.465742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.465754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.466006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.466018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.466268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.466279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.466427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.466438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.466585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.466597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.466738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.466749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.466907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.466920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 
00:36:05.854 [2024-12-16 06:04:39.467098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.467109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.467254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.467265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.467471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.467482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.467665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.467677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.467809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.467821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.468051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.468062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.468265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.468276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.468499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.468511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.468674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.468685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.468835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.468850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 
00:36:05.854 [2024-12-16 06:04:39.469078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.469089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.469239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.854 [2024-12-16 06:04:39.469250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.854 qpair failed and we were unable to recover it. 00:36:05.854 [2024-12-16 06:04:39.469431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.469442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.469650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.469662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.469862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.469874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.470045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.470057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.470288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.470299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.470454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.470465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.470690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.470701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.470839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.470853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 
00:36:05.855 [2024-12-16 06:04:39.471079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.471090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.471188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.471200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.471401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.471412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.471658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.471669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.471941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.471953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.472205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.472219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.472419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.472430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.472600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.472612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.472752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.472763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.472904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.472916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 
00:36:05.855 [2024-12-16 06:04:39.473152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.473163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.473315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.473327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.473555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.473566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.473776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.473788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.474024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.474036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.474194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.474205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.474429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.474441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.474690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.474701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.474952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.474964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.475071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.475082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 
00:36:05.855 [2024-12-16 06:04:39.475308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.475319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.475490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.475501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.475653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.475664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.475813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.475825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.475920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.475932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.476086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.476097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.476252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.476264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.476410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.476421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.476653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.476665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.476749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.476760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 
00:36:05.855 [2024-12-16 06:04:39.477004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.855 [2024-12-16 06:04:39.477016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.855 qpair failed and we were unable to recover it. 00:36:05.855 [2024-12-16 06:04:39.477121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.477132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.477281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.477293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.477505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.477516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.477728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.477739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.477937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.477949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.478176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.478188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.478283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.478294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.478429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.478440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.478611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.478622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 
00:36:05.856 [2024-12-16 06:04:39.478792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.478803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.478950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.478962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.479060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.479071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.479220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.479232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.479457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.479468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.479668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.479682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.479842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.479859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.480028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.480039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.480212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.480223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.480377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.480389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 
00:36:05.856 [2024-12-16 06:04:39.480543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.480554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.480716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.480727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.480861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.480872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.481085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.481096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.481272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.481284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.481490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.481501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.481703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.481715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.481891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.481903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.482152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.482163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.482319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.482330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 
00:36:05.856 [2024-12-16 06:04:39.482491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.482502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.482724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.482735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.482917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.482929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.483154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.483165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.483390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.483401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.483567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.483578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.483802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.483813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.484038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.484050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.484228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.484240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 00:36:05.856 [2024-12-16 06:04:39.484446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.856 [2024-12-16 06:04:39.484457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.856 qpair failed and we were unable to recover it. 
00:36:05.856 [2024-12-16 06:04:39.484603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.484615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.484855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.484867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.485051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.485062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.485302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.485313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.485514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.485526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.485727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.485739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.485940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.485952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.486095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.486106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.486261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.486273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.486481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.486492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 
00:36:05.857 [2024-12-16 06:04:39.486636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.486647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.486876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.486889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.486974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.486985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.487162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.487174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.487411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.487423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.487607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.487620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.487788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.487799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.488013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.488024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.488262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.488299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.488590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.488627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 
00:36:05.857 [2024-12-16 06:04:39.488939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.488978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.489209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.489246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.489458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.489508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.489673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.489685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.489922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.489934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.490125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.490137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.490340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.490351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.490500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.490526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.490808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.490845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.491193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.491215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 
00:36:05.857 [2024-12-16 06:04:39.491427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.491438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.491678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.491689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.491909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.491948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.492162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.492199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.857 [2024-12-16 06:04:39.492485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.857 [2024-12-16 06:04:39.492522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.857 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.492748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.492759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.492984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.492995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.493182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.493219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.493458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.493495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.493735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.493773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 
00:36:05.858 [2024-12-16 06:04:39.494090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.494129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.494234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.494245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.494437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.494474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.494762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.494800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.495049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.495087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.495282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.495320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.495562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.495599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.495726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.495737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.495818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.495830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.496038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.496050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 
00:36:05.858 [2024-12-16 06:04:39.496233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.496270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.496471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.496508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.496748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.496785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.497099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.497137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.497430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.497468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.497626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.497671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.497882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.497921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.498140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.498177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.498319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.498356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.498621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.498632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 
00:36:05.858 [2024-12-16 06:04:39.498896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.498935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.499142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.499180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.499391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.499428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.499635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.499646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.499867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.499878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.500145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.500182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.500468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.500505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.500765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.500776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.500941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.500979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.501279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.501291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 
00:36:05.858 [2024-12-16 06:04:39.501545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.501582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.501841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.501891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.502107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.502145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.858 [2024-12-16 06:04:39.502433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.858 [2024-12-16 06:04:39.502475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.858 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.502715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.502726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.502945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.502957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.503207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.503237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.503552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.503589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.503870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.503909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.504207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.504244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 
00:36:05.859 [2024-12-16 06:04:39.504444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.504481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.504730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.504741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.504978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.505017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.505219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.505256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.505543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.505580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.505807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.505818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.506047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.506058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.506306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.506318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.506521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.506532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.506804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.506842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 
00:36:05.859 [2024-12-16 06:04:39.507121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.507159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.507425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.507436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.507660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.507671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.507823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.507835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.508064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.508102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.508402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.508447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.508726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.508737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.508922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.508961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.509158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.509195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.509397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.509408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 
00:36:05.859 [2024-12-16 06:04:39.509610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.509621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.509787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.509824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.510155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.510193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.510395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.510407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.510556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.510567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.510739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.510750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.510974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.511013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.511323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.511361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.511647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.511658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.511900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.511913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 
00:36:05.859 [2024-12-16 06:04:39.512013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.512024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.512282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.512320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.859 [2024-12-16 06:04:39.512630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.859 [2024-12-16 06:04:39.512668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.859 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.512905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.512944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.513240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.513278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.513546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.513583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.513803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.513842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.514087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.514124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.514385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.514407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.514633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.514644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 
00:36:05.860 [2024-12-16 06:04:39.514864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.514876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.515059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.515071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.515233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.515244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.515402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.515413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.515630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.515641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.515742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.515753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.515953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.515965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.516168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.516180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.516349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.516361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.516582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.516593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 
00:36:05.860 [2024-12-16 06:04:39.516790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.516801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.516948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.516960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.517197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.517234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.517516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.517553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.517818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.517829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.518053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.518099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.518351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.518362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.518499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.518510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.518602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.518613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.518782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.518793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 
00:36:05.860 [2024-12-16 06:04:39.518955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.518993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.519306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.519343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.519437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.519448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.519592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.519603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.519740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.519752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.519974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.519986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.520085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.520096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.520303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.520315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.520523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.520560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.520893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.520933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 
00:36:05.860 [2024-12-16 06:04:39.521140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.521178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.860 [2024-12-16 06:04:39.521409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.860 [2024-12-16 06:04:39.521421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.860 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.521518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.521530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.521711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.521722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.521822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.521871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.522104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.522141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.522471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.522509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.522709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.522750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.522895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.522907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.523089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.523100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 
00:36:05.861 [2024-12-16 06:04:39.523319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.523356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.523556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.523593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.523889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.523928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.524223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.524263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.524483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.524494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.524665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.524676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.524901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.524912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.525023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.525058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.525353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.525391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.525679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.525716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 
00:36:05.861 [2024-12-16 06:04:39.526009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.526047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.526316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.526353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.526660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.526697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.526896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.526935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.527178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.527215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.527494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.527531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.527712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.527723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.527955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.527992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.528216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.528254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.528570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.528616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 
00:36:05.861 [2024-12-16 06:04:39.528764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.528775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.528914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.528926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.529169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.529206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.529421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.529458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.529777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.529815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.530122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.530160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.861 qpair failed and we were unable to recover it. 00:36:05.861 [2024-12-16 06:04:39.530457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.861 [2024-12-16 06:04:39.530494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.530806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.530843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.531082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.531120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.531435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.531472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 
00:36:05.862 [2024-12-16 06:04:39.531729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.531741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.531904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.531916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.532088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.532125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.532415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.532452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.532735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.532746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.532979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.532991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.533166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.533177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.533423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.533460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.533749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.533786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.534009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.534047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 
00:36:05.862 [2024-12-16 06:04:39.534334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.534372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.534662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.534699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.535003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.535042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.535343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.535380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.535535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.535572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.535871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.535909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.536198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.536236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.536504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.536541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.536778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.536789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.536963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.536975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 
00:36:05.862 [2024-12-16 06:04:39.537210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.537249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.537544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.537580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.537859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.537871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.538076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.538114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.538341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.538378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.538689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.538703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.538855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.538866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.539019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.539031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.539202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.539237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.539407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.539443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 
00:36:05.862 [2024-12-16 06:04:39.539783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.539820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.540062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.540100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.540331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.540368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.540671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.540682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.540904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.540943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.862 qpair failed and we were unable to recover it. 00:36:05.862 [2024-12-16 06:04:39.541181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.862 [2024-12-16 06:04:39.541219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.541508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.541519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.541745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.541756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.542007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.542045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.542272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.542309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 
00:36:05.863 [2024-12-16 06:04:39.542637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.542673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.542898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.542937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.543241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.543278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.543547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.543584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.543877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.543915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.544191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.544228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.544496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.544534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.544752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.544788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.545068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.545107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.545396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.545433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 
00:36:05.863 [2024-12-16 06:04:39.545717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.545754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.545920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.545959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.546189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.546227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.546404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.546415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.546563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.546575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.546725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.546736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.546903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.546915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.547073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.547111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.547337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.547374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.547684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.547722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 
00:36:05.863 [2024-12-16 06:04:39.548048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.548060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.548215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.548227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.548464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.548476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.548574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.548585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.548740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.548751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.548947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.548962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.549065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.549076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.549218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.549255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.549570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.549607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.549899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.549939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 
00:36:05.863 [2024-12-16 06:04:39.550235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.550273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.550522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.550533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.550668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.550680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.550776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.550803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.863 qpair failed and we were unable to recover it. 00:36:05.863 [2024-12-16 06:04:39.551103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.863 [2024-12-16 06:04:39.551141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.551363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.551400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.551690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.551728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.551956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.551995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.552211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.552247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.552478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.552515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 
00:36:05.864 [2024-12-16 06:04:39.552718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.552756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.552966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.552977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.553192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.553229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.553363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.553376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.553603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.553642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.553921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.553960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.554129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.554167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.554353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.554428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.554627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.554672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.554834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.554860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 
00:36:05.864 [2024-12-16 06:04:39.555085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.555118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.555414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.555446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.555761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.555802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.556052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.556072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.556183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.556200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.556358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.556390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.556604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.556636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.556858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.556893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.557190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.557223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.557373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.557405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 
00:36:05.864 [2024-12-16 06:04:39.557695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.557737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.557990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.558025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.558170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.558203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.558391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.558423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.558695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.558728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.558909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.558942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.559146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.559180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.559473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.559490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.559704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.559719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.559909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.559926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 
00:36:05.864 [2024-12-16 06:04:39.560165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.560181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.560433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.560450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.560603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.560619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.560866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.864 [2024-12-16 06:04:39.560900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.864 qpair failed and we were unable to recover it. 00:36:05.864 [2024-12-16 06:04:39.561094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.561126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.561269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.561302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.561485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.561501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.561732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.561764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.561960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.561995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.562265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.562304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 
00:36:05.865 [2024-12-16 06:04:39.562501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.562533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.562802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.562835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.563135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.563169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.563366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.563399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.563694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.563727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.564026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.564060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.564270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.564303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.564480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.564497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.564642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.564674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.564924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.564958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 
00:36:05.865 [2024-12-16 06:04:39.565191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.565224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.565421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.565453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.565644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.565677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.565821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.565838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.566080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.566114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.566403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.566436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.566710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.566742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.566957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.566991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.567105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.567139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.567331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.567363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 
00:36:05.865 [2024-12-16 06:04:39.567630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.567663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.567861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.567878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.568090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.568106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.568272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.568305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.568623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.568656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.568903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.568938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.569144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.569176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.569442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.569476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.569663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.569696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.569985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.570019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 
00:36:05.865 [2024-12-16 06:04:39.570148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.570181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.570471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.570505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.570698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.570714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.865 qpair failed and we were unable to recover it. 00:36:05.865 [2024-12-16 06:04:39.570977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.865 [2024-12-16 06:04:39.571011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.571258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.571292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.571498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.571514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.571694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.571727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.571992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.572026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.572290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.572323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.572597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.572639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 
00:36:05.866 [2024-12-16 06:04:39.572858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.572893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.573107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.573139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.573282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.573314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.573509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.573541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.573723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.573756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.574025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.574059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.574306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.574338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.574526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.574543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.574704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.574737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.574915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.574950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 
00:36:05.866 [2024-12-16 06:04:39.575134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.575167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.575386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.575419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.575544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.575577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.575843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.575886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.576103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.576136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.576258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.576291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.576511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.576527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.576626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.576642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.576745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.576762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.576919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.576937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 
00:36:05.866 [2024-12-16 06:04:39.577048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.577081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.577294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.577330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.577520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.577552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.577726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.577743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.577863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.577881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.578044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.578086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.578277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.578310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.866 [2024-12-16 06:04:39.578509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.866 [2024-12-16 06:04:39.578549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.866 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.578758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.578774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.578895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.578939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 
00:36:05.867 [2024-12-16 06:04:39.579115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.579148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.579294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.579327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.579594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.579627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.579826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.579868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.580056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.580088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.580284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.580316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.580606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.580622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.580764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.580780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.580960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.580976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.581067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.581083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 
00:36:05.867 [2024-12-16 06:04:39.581181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.581214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.581417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.581449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.581557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.581590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.581731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.581747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.581908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.581925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.582110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.582126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.582226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.582242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.582413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.582429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.582525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.582548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.582644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.582660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 
00:36:05.867 [2024-12-16 06:04:39.582853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.582870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.582969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.582985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.583088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.583104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.583281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.583314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.583561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.583593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.583711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.583743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.583937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.583954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.584102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.584118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.584319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.584336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.584491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.584525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 
00:36:05.867 [2024-12-16 06:04:39.584724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.584756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.585030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.585065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.585250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.585283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.585498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.585531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.585655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.585687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.585902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.585936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.586131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.586165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.867 qpair failed and we were unable to recover it. 00:36:05.867 [2024-12-16 06:04:39.586363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.867 [2024-12-16 06:04:39.586396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.586640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.586678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.586874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.586908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 
00:36:05.868 [2024-12-16 06:04:39.587121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.587154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.587401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.587433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.587550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.587584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.587804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.587837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.588030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.588063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.588283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.588317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.588531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.588564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.588681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.588722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.588983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.589000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.589249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.589281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 
00:36:05.868 [2024-12-16 06:04:39.589420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.589452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.589632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.589664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.589857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.589891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.590162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.590194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.590394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.590427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.590545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.590578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.590793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.590825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.591031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.591064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.591193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.591225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.591347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.591379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 
00:36:05.868 [2024-12-16 06:04:39.591528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.591560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.591750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.591782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.591908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.591941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.592083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.592114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.592307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.592339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.592463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.592501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.592682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.592713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.592953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.592970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.593133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.593166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.593291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.593323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 
00:36:05.868 [2024-12-16 06:04:39.593512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.593545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.593748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.593764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.593975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.593992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.594093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.594108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.594255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.594271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.594373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.594389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.594475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.868 [2024-12-16 06:04:39.594491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.868 qpair failed and we were unable to recover it. 00:36:05.868 [2024-12-16 06:04:39.594597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.594613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.594774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.594790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.594879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.594896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 
00:36:05.869 [2024-12-16 06:04:39.595050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.595066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.595225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.595241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.595384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.595400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.595647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.595677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.595882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.595916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.596108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.596140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.596408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.596440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.596580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.596596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.596706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.596722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.596899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.596916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 
00:36:05.869 [2024-12-16 06:04:39.597153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.597169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.597414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.597446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.597625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.597657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.597843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.597902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.598022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.598038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.598144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.598159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.598369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.598385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.598488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.598504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.598686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.598718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.598854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.598887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 
00:36:05.869 [2024-12-16 06:04:39.599194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.599226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.599469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.599501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.599727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.599759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.600026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.600042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.600222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.600238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.600478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.600510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.600696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.600733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.600915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.600949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.601213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.601244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.601514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.601559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 
00:36:05.869 [2024-12-16 06:04:39.601667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.601683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.601923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.601956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.602220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.602252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.602566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.602598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.602780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.602796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.603011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.603044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.869 [2024-12-16 06:04:39.603310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.869 [2024-12-16 06:04:39.603342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.869 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.603603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.603620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.603771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.603786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.603963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.603980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 
00:36:05.870 [2024-12-16 06:04:39.604092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.604108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.604203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.604218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.604377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.604393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.604563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.604579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.604788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.604804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.604909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.604925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.605085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.605101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.605246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.605262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.605505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.605520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.605748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.605763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 
00:36:05.870 [2024-12-16 06:04:39.606049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.606083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.606271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.606303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.606580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.606612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.606893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.606932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.607117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.607149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.607393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.607425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.607633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.607666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.607932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.607965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.608161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.608192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.608406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.608439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 
00:36:05.870 [2024-12-16 06:04:39.608704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.608736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.608919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.608936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.609174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.609206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.609426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.609458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.609696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.609712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.609877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.609910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.610099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.610131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.610449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.610522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.610748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.610783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.611007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.611043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 
00:36:05.870 [2024-12-16 06:04:39.611343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.611376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.870 qpair failed and we were unable to recover it. 00:36:05.870 [2024-12-16 06:04:39.611637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.870 [2024-12-16 06:04:39.611669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.611872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.611906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.612086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.612119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.612305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.612336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.612624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.612656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.612903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.612937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.613199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.613231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.613502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.613534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.613825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.613864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 
00:36:05.871 [2024-12-16 06:04:39.614128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.614169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.614393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.614424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.614620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.614636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.614823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.614862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.615155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.615187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.615477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.615509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.615775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.615807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.616022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.616056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.616282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.616314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.616453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.616485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 
00:36:05.871 [2024-12-16 06:04:39.616771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.616804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.617053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.617070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.617244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.617260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.617483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.617515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.617795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.617827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.618058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.618074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.618184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.618200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.618423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.618439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.618665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.618681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.618785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.618801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 
00:36:05.871 [2024-12-16 06:04:39.619064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.619097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.619344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.619375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.619567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.619599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.619792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.619808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.620013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.620029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.620194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.620225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.620425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.620457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.620779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.620819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.620991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.621008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.871 [2024-12-16 06:04:39.621184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.621216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 
00:36:05.871 [2024-12-16 06:04:39.621464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.871 [2024-12-16 06:04:39.621497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.871 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.621789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.621826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.622092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.622110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.622282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.622299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.622466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.622482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.622643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.622675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.622950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.622983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.623176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.623208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.623319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.623350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.623579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.623612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 
00:36:05.872 [2024-12-16 06:04:39.623818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.623865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.624013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.624045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.624293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.624325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.624474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.624506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.624778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.624810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.625086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.625102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.625341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.625380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.625656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.625687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.625950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.625984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.626127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.626160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 
00:36:05.872 [2024-12-16 06:04:39.626428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.626460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.626668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.626684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.626857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.626890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.627083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.627115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.627316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.627353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.627545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.627587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.627839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.627895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.628049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.628081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.628350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.628382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.628661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.628693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 
00:36:05.872 [2024-12-16 06:04:39.628978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.629013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.629214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.629230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.629441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.629456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.629643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.629658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.629806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.629821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.630021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.630054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.630203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.630235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.630505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.630580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.630837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.630865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 00:36:05.872 [2024-12-16 06:04:39.630977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.872 [2024-12-16 06:04:39.630993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.872 qpair failed and we were unable to recover it. 
00:36:05.873 [2024-12-16 06:04:39.631210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.631242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.631442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.631475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.631750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.631782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.632083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.632117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.632378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.632409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.632710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.632743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.633020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.633036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.633287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.633319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.633516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.633548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.633794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.633827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 
00:36:05.873 [2024-12-16 06:04:39.634107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.873 [2024-12-16 06:04:39.634140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.873 qpair failed and we were unable to recover it.
00:36:05.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3579357 Killed "${NVMF_APP[@]}" "$@"
00:36:05.873 [2024-12-16 06:04:39.634402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.873 [2024-12-16 06:04:39.634435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.873 qpair failed and we were unable to recover it.
00:36:05.873 [2024-12-16 06:04:39.634626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.873 [2024-12-16 06:04:39.634658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.873 qpair failed and we were unable to recover it.
00:36:05.873 [2024-12-16 06:04:39.634923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.873 [2024-12-16 06:04:39.634940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.873 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:36:05.873 qpair failed and we were unable to recover it.
00:36:05.873 [2024-12-16 06:04:39.635161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.873 [2024-12-16 06:04:39.635177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.873 qpair failed and we were unable to recover it.
00:36:05.873 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:36:05.873 [2024-12-16 06:04:39.635342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.873 [2024-12-16 06:04:39.635359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.873 qpair failed and we were unable to recover it.
00:36:05.873 [2024-12-16 06:04:39.635590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.873 [2024-12-16 06:04:39.635609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.873 qpair failed and we were unable to recover it.
00:36:05.873 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:36:05.873 [2024-12-16 06:04:39.635851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.873 [2024-12-16 06:04:39.635868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.873 qpair failed and we were unable to recover it.
00:36:05.873 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:05.873 [2024-12-16 06:04:39.636058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.636075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:05.873 [2024-12-16 06:04:39.636350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.636367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.636591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.636607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.636765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.636782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.636947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.636964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.637205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.637221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.637371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.637386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.637487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.637503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.637741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.637756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 
00:36:05.873 [2024-12-16 06:04:39.637995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.638012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.638174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.638190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.638408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.638424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.638651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.638667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.638933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.638950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.639117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.639133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.639368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.639384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.639597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.639613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.639858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.639877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 00:36:05.873 [2024-12-16 06:04:39.640042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.873 [2024-12-16 06:04:39.640058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.873 qpair failed and we were unable to recover it. 
00:36:05.874 [2024-12-16 06:04:39.640304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.874 [2024-12-16 06:04:39.640320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.874 qpair failed and we were unable to recover it. 00:36:05.874 [2024-12-16 06:04:39.640489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.874 [2024-12-16 06:04:39.640506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.874 qpair failed and we were unable to recover it. 00:36:05.874 [2024-12-16 06:04:39.640653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.874 [2024-12-16 06:04:39.640669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.874 qpair failed and we were unable to recover it. 00:36:05.874 [2024-12-16 06:04:39.640884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.874 [2024-12-16 06:04:39.640900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.874 qpair failed and we were unable to recover it. 00:36:05.874 [2024-12-16 06:04:39.641119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.874 [2024-12-16 06:04:39.641135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.874 qpair failed and we were unable to recover it. 00:36:05.874 [2024-12-16 06:04:39.641348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.874 [2024-12-16 06:04:39.641364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.874 qpair failed and we were unable to recover it. 00:36:05.874 [2024-12-16 06:04:39.641624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.874 [2024-12-16 06:04:39.641640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.874 qpair failed and we were unable to recover it. 00:36:05.874 [2024-12-16 06:04:39.641808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.874 [2024-12-16 06:04:39.641824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.874 qpair failed and we were unable to recover it. 00:36:05.874 [2024-12-16 06:04:39.641994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.874 [2024-12-16 06:04:39.642011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.874 qpair failed and we were unable to recover it. 00:36:05.874 [2024-12-16 06:04:39.642168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.874 [2024-12-16 06:04:39.642184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.874 qpair failed and we were unable to recover it. 
00:36:05.874 [2024-12-16 06:04:39.642333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.874 [2024-12-16 06:04:39.642349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.874 qpair failed and we were unable to recover it.
00:36:05.874 [2024-12-16 06:04:39.642565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.874 [2024-12-16 06:04:39.642581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.874 qpair failed and we were unable to recover it.
00:36:05.874 [2024-12-16 06:04:39.642767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.874 [2024-12-16 06:04:39.642784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.874 qpair failed and we were unable to recover it.
00:36:05.874 [2024-12-16 06:04:39.643019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.874 [2024-12-16 06:04:39.643036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.874 qpair failed and we were unable to recover it.
00:36:05.874 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=3580056
00:36:05.874 [2024-12-16 06:04:39.643148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.874 [2024-12-16 06:04:39.643164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.874 qpair failed and we were unable to recover it.
00:36:05.874 [2024-12-16 06:04:39.643329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.874 [2024-12-16 06:04:39.643345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.874 qpair failed and we were unable to recover it.
00:36:05.874 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 3580056
00:36:05.874 [2024-12-16 06:04:39.643511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.874 [2024-12-16 06:04:39.643528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.874 qpair failed and we were unable to recover it.
00:36:05.874 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:36:05.874 [2024-12-16 06:04:39.643685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.874 [2024-12-16 06:04:39.643702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.874 qpair failed and we were unable to recover it.
00:36:05.874 [2024-12-16 06:04:39.643782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.874 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3580056 ']'
00:36:05.874 [2024-12-16 06:04:39.643798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.874 qpair failed and we were unable to recover it.
00:36:05.874 [2024-12-16 06:04:39.644022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.874 [2024-12-16 06:04:39.644038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.874 qpair failed and we were unable to recover it.
00:36:05.874 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:05.874 [2024-12-16 06:04:39.644137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.874 [2024-12-16 06:04:39.644153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.874 qpair failed and we were unable to recover it.
00:36:05.874 [2024-12-16 06:04:39.644367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.874 [2024-12-16 06:04:39.644384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.874 qpair failed and we were unable to recover it.
00:36:05.874 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:36:05.874 [2024-12-16 06:04:39.644645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.874 [2024-12-16 06:04:39.644665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.874 qpair failed and we were unable to recover it.
00:36:05.874 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:05.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:05.874 [2024-12-16 06:04:39.644827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.874 [2024-12-16 06:04:39.644844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.874 qpair failed and we were unable to recover it.
00:36:05.874 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:36:05.874 [2024-12-16 06:04:39.645033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.874 [2024-12-16 06:04:39.645050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:05.874 qpair failed and we were unable to recover it.
00:36:05.874 [2024-12-16 06:04:39.645284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.874 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:05.874 [2024-12-16 06:04:39.645301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.874 qpair failed and we were unable to recover it. 00:36:05.874 [2024-12-16 06:04:39.645450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.874 [2024-12-16 06:04:39.645467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.874 qpair failed and we were unable to recover it. 00:36:05.874 [2024-12-16 06:04:39.645632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.874 [2024-12-16 06:04:39.645649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.874 qpair failed and we were unable to recover it. 00:36:05.874 [2024-12-16 06:04:39.645801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.874 [2024-12-16 06:04:39.645818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.874 qpair failed and we were unable to recover it. 00:36:05.874 [2024-12-16 06:04:39.645988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.874 [2024-12-16 06:04:39.646005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.874 qpair failed and we were unable to recover it. 00:36:05.874 [2024-12-16 06:04:39.646152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.874 [2024-12-16 06:04:39.646167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.874 qpair failed and we were unable to recover it. 00:36:05.874 [2024-12-16 06:04:39.646257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.874 [2024-12-16 06:04:39.646273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.874 qpair failed and we were unable to recover it. 00:36:05.874 [2024-12-16 06:04:39.646440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.874 [2024-12-16 06:04:39.646455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.874 qpair failed and we were unable to recover it. 00:36:05.874 [2024-12-16 06:04:39.646581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.874 [2024-12-16 06:04:39.646597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.874 qpair failed and we were unable to recover it. 00:36:05.874 [2024-12-16 06:04:39.646752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.874 [2024-12-16 06:04:39.646771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.874 qpair failed and we were unable to recover it. 
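For reference on the shell trace interleaved above: after the old target process is killed, nvmf_target_disconnect_tc2 restarts it via nvmfappstart, records the new PID as nvmfpid=3580056, and calls waitforlisten with rpc_addr defaulting to /var/tmp/spdk.sock and max_retries to 100. The sketch below is a minimal, hypothetical illustration of that kind of readiness poll; it is not the SPDK helper itself, and the values simply mirror what the trace shows.

#!/usr/bin/env bash
# Hypothetical sketch of a "wait until the target is listening" loop,
# using the PID, RPC socket path, and retry budget visible in the trace above.
pid=3580056                    # nvmfpid reported in the log
rpc_addr=/var/tmp/spdk.sock    # UNIX domain socket the target creates once it is up
max_retries=100

for ((i = 0; i < max_retries; i++)); do
    # Fail fast if the target process died instead of coming up.
    kill -0 "$pid" 2>/dev/null || { echo "process $pid exited" >&2; exit 1; }
    # Consider the target ready once its RPC socket exists.
    if [ -S "$rpc_addr" ]; then
        echo "process $pid is listening on $rpc_addr"
        exit 0
    fi
    sleep 0.5
done
echo "timed out waiting for $rpc_addr" >&2
exit 1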
00:36:05.875 [2024-12-16 06:04:39.647056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.647073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.647229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.647245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.647404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.647423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.647665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.647682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.647863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.647880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.648125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.648140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.648310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.648327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.648511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.648527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.648766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.648782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.648999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.649016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 
00:36:05.875 [2024-12-16 06:04:39.649177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.649193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.649343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.649360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.649463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.649479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.649722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.649739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.649989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.650008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.650286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.650302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.650399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.650415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.650651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.650668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.650797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.650813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.650932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.650950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 
00:36:05.875 [2024-12-16 06:04:39.651052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.651068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.651303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.651320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.651470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.651486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.651596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.651612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.651765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.651781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.651950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.651968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.652116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.652135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.652226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.652242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.652457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.652474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.652716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.652733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 
00:36:05.875 [2024-12-16 06:04:39.652968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.652986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.653135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.653151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.653268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.653284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.653445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.653462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.653695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.875 [2024-12-16 06:04:39.653712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.875 qpair failed and we were unable to recover it. 00:36:05.875 [2024-12-16 06:04:39.653937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.653954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.654118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.654134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.654367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.654384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.654607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.654622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.654881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.654898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 
00:36:05.876 [2024-12-16 06:04:39.655091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.655133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.655398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.655438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.655664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.655683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.655905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.655923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.656160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.656176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.656344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.656360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.656598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.656614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.656832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.656853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.657035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.657052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.657166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.657181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 
00:36:05.876 [2024-12-16 06:04:39.657348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.657364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.657547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.657563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.657796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.657811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.657974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.657997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.658229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.658247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.658347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.658363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.658481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.658496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.658718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.658735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.658906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.658924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.659143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.659158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 
00:36:05.876 [2024-12-16 06:04:39.659406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.659422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.659584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.659601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.659755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.659771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.659947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.659963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.660079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.660095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.660255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.660271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.660497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.660513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.660738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.660754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.661015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.661032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.661211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.661227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 
00:36:05.876 [2024-12-16 06:04:39.661452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.661468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.661632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.661649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.661745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.661761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.662003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.876 [2024-12-16 06:04:39.662020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.876 qpair failed and we were unable to recover it. 00:36:05.876 [2024-12-16 06:04:39.662212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.662228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 00:36:05.877 [2024-12-16 06:04:39.662504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.662520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 00:36:05.877 [2024-12-16 06:04:39.662679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.662695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 00:36:05.877 [2024-12-16 06:04:39.662782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.662800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 00:36:05.877 [2024-12-16 06:04:39.662913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.662930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 00:36:05.877 [2024-12-16 06:04:39.663024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.663041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 
00:36:05.877 [2024-12-16 06:04:39.663216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.663235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 00:36:05.877 [2024-12-16 06:04:39.663403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.663419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 00:36:05.877 [2024-12-16 06:04:39.663653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.663669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 00:36:05.877 [2024-12-16 06:04:39.663914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.663932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 00:36:05.877 [2024-12-16 06:04:39.664163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.664178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 00:36:05.877 [2024-12-16 06:04:39.664281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.664296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 00:36:05.877 [2024-12-16 06:04:39.664468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.664484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 00:36:05.877 [2024-12-16 06:04:39.664641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.664659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 00:36:05.877 [2024-12-16 06:04:39.664808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.664824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 00:36:05.877 [2024-12-16 06:04:39.665017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.665034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 
00:36:05.877 [2024-12-16 06:04:39.665135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.665150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 00:36:05.877 [2024-12-16 06:04:39.665317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.665332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 00:36:05.877 [2024-12-16 06:04:39.665524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.665540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 00:36:05.877 [2024-12-16 06:04:39.665761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.665776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 00:36:05.877 [2024-12-16 06:04:39.666020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.666036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 00:36:05.877 [2024-12-16 06:04:39.666208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.877 [2024-12-16 06:04:39.666226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:05.877 qpair failed and we were unable to recover it. 00:36:06.160 [2024-12-16 06:04:39.666534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.160 [2024-12-16 06:04:39.666550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.160 qpair failed and we were unable to recover it. 00:36:06.160 [2024-12-16 06:04:39.666693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.160 [2024-12-16 06:04:39.666710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.160 qpair failed and we were unable to recover it. 00:36:06.160 [2024-12-16 06:04:39.666873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.160 [2024-12-16 06:04:39.666889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.160 qpair failed and we were unable to recover it. 00:36:06.160 [2024-12-16 06:04:39.667122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.160 [2024-12-16 06:04:39.667140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.160 qpair failed and we were unable to recover it. 
00:36:06.160 [2024-12-16 06:04:39.667238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.160 [2024-12-16 06:04:39.667254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.160 qpair failed and we were unable to recover it. 00:36:06.160 [2024-12-16 06:04:39.667465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.160 [2024-12-16 06:04:39.667481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.160 qpair failed and we were unable to recover it. 00:36:06.160 [2024-12-16 06:04:39.667638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.160 [2024-12-16 06:04:39.667654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.160 qpair failed and we were unable to recover it. 00:36:06.160 [2024-12-16 06:04:39.667863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.160 [2024-12-16 06:04:39.667880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.160 qpair failed and we were unable to recover it. 00:36:06.160 [2024-12-16 06:04:39.667969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.160 [2024-12-16 06:04:39.667985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.160 qpair failed and we were unable to recover it. 00:36:06.160 [2024-12-16 06:04:39.668223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.160 [2024-12-16 06:04:39.668238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.160 qpair failed and we were unable to recover it. 00:36:06.160 [2024-12-16 06:04:39.668407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.160 [2024-12-16 06:04:39.668422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.160 qpair failed and we were unable to recover it. 00:36:06.160 [2024-12-16 06:04:39.668521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.160 [2024-12-16 06:04:39.668540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.160 qpair failed and we were unable to recover it. 00:36:06.160 [2024-12-16 06:04:39.668698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.160 [2024-12-16 06:04:39.668713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.160 qpair failed and we were unable to recover it. 00:36:06.160 [2024-12-16 06:04:39.668920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.160 [2024-12-16 06:04:39.668937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.160 qpair failed and we were unable to recover it. 
00:36:06.160 [2024-12-16 06:04:39.669095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.669111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.669197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.669212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.669460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.669475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.669632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.669648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.669861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.669878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.670115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.670131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.670220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.670235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.670455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.670472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.670673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.670689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.670875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.670891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 
00:36:06.161 [2024-12-16 06:04:39.671102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.671118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.671379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.671395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.671639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.671654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.671875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.671891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.672036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.672052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.672210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.672225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.672325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.672341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.672509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.672524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.672775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.672790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.672973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.672990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 
00:36:06.161 [2024-12-16 06:04:39.673272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.673288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.673431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.673446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.673685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.673701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.673913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.673929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.674028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.674053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.674288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.674305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.674414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.674430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.674606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.674621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.674860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.674876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.675092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.675108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 
00:36:06.161 [2024-12-16 06:04:39.675366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.675381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.675546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.675561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.675723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.675739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.675842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.675861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.676024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.676041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.676213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.676228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.676324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.676339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.676517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.676533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.676645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.676664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.676741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.676756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 
00:36:06.161 [2024-12-16 06:04:39.676843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.676864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.677108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.677124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.161 [2024-12-16 06:04:39.677215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.161 [2024-12-16 06:04:39.677230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.161 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.677371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.677386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.677497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.677513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.677698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.677713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.677883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.677900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.678054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.678070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.678216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.678231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.678314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.678329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 
00:36:06.162 [2024-12-16 06:04:39.678490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.678506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.678693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.678712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.678871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.678887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.678981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.678997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.679177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.679193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.679279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.679294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.679453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.679468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.679649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.679665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.679759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.679774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.679924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.679940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 
00:36:06.162 [2024-12-16 06:04:39.680036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.680051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.680213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.680228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.680335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.680350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.680435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.680450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.680531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.680546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.680647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.680663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.680897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.680913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.681076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.681091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.681250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.681266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.681408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.681424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 
00:36:06.162 [2024-12-16 06:04:39.681568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.681584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.681793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.681809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.681900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.681916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.682059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.682074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.682172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.682188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.682345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.682360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.682506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.682522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.682629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.682644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.682807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.682828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.682993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.683010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 
00:36:06.162 [2024-12-16 06:04:39.683176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.683191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.683385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.683401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.683556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.683571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.683657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.683672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.683902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.162 [2024-12-16 06:04:39.683919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.162 qpair failed and we were unable to recover it. 00:36:06.162 [2024-12-16 06:04:39.684011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.684027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.684183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.684199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.684288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.684304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.684394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.684410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.684566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.684581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 
00:36:06.163 [2024-12-16 06:04:39.684668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.684684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.684777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.684792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.685007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.685023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.685131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.685146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.685222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.685237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.685387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.685402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.685542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.685558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.685720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.685735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.685892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.685909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.686076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.686092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 
00:36:06.163 [2024-12-16 06:04:39.686249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.686265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.686358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.686373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.686523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.686538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.686643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.686659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.686811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.686827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.687044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.687063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.687209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.687225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.687387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.687402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.687550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.687565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.687730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.687745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 
00:36:06.163 [2024-12-16 06:04:39.687831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.687850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.688012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.688028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.688125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.688142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.688303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.688319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.688481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.688497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.688581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.688596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.688804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.688820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.688984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.689001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.689097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.689112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.689343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.689358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 
00:36:06.163 [2024-12-16 06:04:39.689432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.689448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.689545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.689561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.689795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.689811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.689910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.689927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.690035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.690050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.690195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.690211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.690324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.690340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.163 [2024-12-16 06:04:39.690536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.163 [2024-12-16 06:04:39.690552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.163 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.690738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.690754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.690902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.690919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 
00:36:06.164 [2024-12-16 06:04:39.691088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.691103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.691278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.691294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.691384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.691399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.691582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.691598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.691751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.691766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.691842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.691862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.692067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.692084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.692322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.692338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.692418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.692433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.692581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.692597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 
00:36:06.164 [2024-12-16 06:04:39.692674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.692690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.692831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.692851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.692959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.692974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.693155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.693171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.693325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.693341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.693502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.693517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.693669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.693687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.693856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.693872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.694025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.694042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.694116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.694132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 
00:36:06.164 [2024-12-16 06:04:39.694290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.694307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.694473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.694488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.694634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.694650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.694861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.694878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.695018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.695034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.695126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.695142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.695357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.695372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.695588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.695603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.695814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.695830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.695935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.695951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 
00:36:06.164 [2024-12-16 06:04:39.696110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.696126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.696316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.696332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.696482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.696498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.696645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.164 [2024-12-16 06:04:39.696660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.164 qpair failed and we were unable to recover it. 00:36:06.164 [2024-12-16 06:04:39.696816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.696832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.697004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.697028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.697267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.697279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.697434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.697445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.697706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.697717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.697819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.697830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 
00:36:06.165 [2024-12-16 06:04:39.697985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.697997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.698147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.698158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.698247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.698258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.698411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.698427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.698523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.698535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.698753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.698764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.698856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.698868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.699110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.699122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.699329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.699341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.699432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.699443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 
00:36:06.165 [2024-12-16 06:04:39.699653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.699665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.699800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.699811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.699962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.699974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.700117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.700130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.700320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.700331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.700543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.700554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.700579] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:06.165 [2024-12-16 06:04:39.700632] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:06.165 [2024-12-16 06:04:39.700658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.700677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.700779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.700794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.701028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.701043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 
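(Note: interleaved with the connection errors above, the nvmf target process itself is starting: SPDK v24.09.1-pre initializes DPDK 23.11.0 with the EAL parameters printed in the log, i.e. core mask 0xF0, telemetry disabled, per-library log levels, a fixed base virtual address, --match-allocations, file prefix spdk0, and auto proc type. As a rough sketch of how such a parameter list reaches DPDK, an application passes it to rte_eal_init() as an argv array. The values below are copied verbatim from the log; this is an illustration only, not the actual SPDK initialization code.)

/* eal_init_sketch.c - illustration: hand the EAL parameters printed in the log
 * to rte_eal_init(). Requires DPDK headers and libraries to build. */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    /* Argument list copied from the "DPDK EAL parameters" line in the log. */
    char *eal_argv[] = {
        "nvmf",
        "-c", "0xF0",
        "--no-telemetry",
        "--log-level=lib.eal:6",
        "--log-level=lib.cryptodev:5",
        "--log-level=lib.power:5",
        "--log-level=user1:6",
        "--base-virtaddr=0x200000000000",
        "--match-allocations",
        "--file-prefix=spdk0",
        "--proc-type=auto",
    };
    int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

    int ret = rte_eal_init(eal_argc, eal_argv);
    if (ret < 0) {
        fprintf(stderr, "rte_eal_init failed\n");
        return 1;
    }
    printf("EAL initialized, %d args consumed\n", ret);
    rte_eal_cleanup();
    return 0;
}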
00:36:06.165 [2024-12-16 06:04:39.701208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.701221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.701445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.701457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.701546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.701558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.701739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.701751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.701911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.701923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.702022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.702034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.702119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.702131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.702287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.702299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.702500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.702513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.702599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.702611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 
00:36:06.165 [2024-12-16 06:04:39.702765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.702778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.702948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.702962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.703103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.703115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.703333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.703361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.703448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.703461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.703537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.165 [2024-12-16 06:04:39.703549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.165 qpair failed and we were unable to recover it. 00:36:06.165 [2024-12-16 06:04:39.703694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.703706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.703854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.703866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.704063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.704077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.704159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.704171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 
00:36:06.166 [2024-12-16 06:04:39.704379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.704391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.704544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.704556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.704748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.704760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.704902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.704915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.705004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.705017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.705156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.705167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.705259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.705277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.705374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.705387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.705629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.705641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.705724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.705734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 
00:36:06.166 [2024-12-16 06:04:39.705802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.705812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.705909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.705920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.706014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.706028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.706118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.706129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.706274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.706286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.706445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.706456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.706543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.706556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.706639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.706650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.706753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.706766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.706872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.706883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 
00:36:06.166 [2024-12-16 06:04:39.706980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.706991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.707131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.707142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.707235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.707246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.707446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.707457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.707680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.707691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.707779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.707791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.707968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.707980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.708065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.708075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.708278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.708290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.708393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.708404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 
00:36:06.166 [2024-12-16 06:04:39.708573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.708585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.708809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.708820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.708906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.708917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.709152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.709163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.709313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.709324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.709431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.709443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.709605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.709616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.709722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.166 [2024-12-16 06:04:39.709733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.166 qpair failed and we were unable to recover it. 00:36:06.166 [2024-12-16 06:04:39.709914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.709926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.710075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.710086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 
00:36:06.167 [2024-12-16 06:04:39.710162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.710173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.710250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.710261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.710411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.710423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.710566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.710578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.710662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.710674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.710809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.710820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.710917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.710929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.711036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.711047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.711147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.711159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.711295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.711306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 
00:36:06.167 [2024-12-16 06:04:39.711389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.711400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.711556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.711568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.711710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.711721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.711894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.711907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.711983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.711994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.712172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.712185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.712317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.712331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.712486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.712498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.712646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.712658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.712808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.712819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 
00:36:06.167 [2024-12-16 06:04:39.712982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.712994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.713066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.713092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.713264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.713276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.713495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.713507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.713663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.713675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.713879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.713891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.714091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.714102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.714192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.714203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.714370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.714383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.714481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.714493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 
00:36:06.167 [2024-12-16 06:04:39.714632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.714644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.714799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.714810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.714895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.714919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.715063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.715073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.715220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.715232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.715390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.715401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.715500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.715511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.715603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.715614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.715777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.715789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 00:36:06.167 [2024-12-16 06:04:39.715941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.167 [2024-12-16 06:04:39.715953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.167 qpair failed and we were unable to recover it. 
00:36:06.167 [2024-12-16 06:04:39.716117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.716128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.716211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.716223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.716329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.716341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.716427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.716439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.716531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.716542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.716637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.716648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.716881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.716894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.717031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.717043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.717249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.717261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.717483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.717495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 
00:36:06.168 [2024-12-16 06:04:39.717590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.717602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.717642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2ad30 (9): Bad file descriptor 00:36:06.168 [2024-12-16 06:04:39.717879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.717908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.718076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.718096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.718259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.718274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.718438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.718454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.718625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.718640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.718752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.718769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.718869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.718885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.718979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.718995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 
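(Note: one entry above differs from the connect() retries: nvme_tcp_qpair_process_completions fails to flush tqpair=0xa2ad30 with error 9, which is EBADF, indicating the qpair's underlying socket descriptor had already been torn down by the time the flush ran. The snippet below is purely an illustration of that errno on an already-closed descriptor, unrelated to SPDK internals.)

/* ebadf_sketch.c - illustration of errno 9 (EBADF): I/O on a descriptor that
 * has already been closed fails with "Bad file descriptor". */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }
    close(fds[1]);                        /* tear the descriptor down first */
    if (write(fds[1], "x", 1) < 0) {      /* ...then try to use it anyway   */
        /* prints: write failed, errno = 9 (Bad file descriptor) */
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fds[0]);
    return 0;
}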
00:36:06.168 [2024-12-16 06:04:39.719162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.719178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.719321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.719337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.719482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.719497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.719568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.719583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.719742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.719757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.719926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.719943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.720095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.720111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.720276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.720292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.720372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.720388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.720480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.720497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 
00:36:06.168 [2024-12-16 06:04:39.720595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.720614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.720775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.720790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.720934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.720950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.721160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.721176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.721279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.721294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.721450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.721465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.721625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.721641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.168 qpair failed and we were unable to recover it. 00:36:06.168 [2024-12-16 06:04:39.721834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.168 [2024-12-16 06:04:39.721855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.722109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.722125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.722333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.722349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 
00:36:06.169 [2024-12-16 06:04:39.722589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.722606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.722747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.722763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.722858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.722875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.723032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.723048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.723259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.723275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.723440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.723456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.723554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.723569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.723650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.723666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.723763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.723778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.723994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.724011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 
00:36:06.169 [2024-12-16 06:04:39.724170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.724186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.724417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.724433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.724594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.724610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.724771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.724787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.724876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.724893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.724994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.725010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.725180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.725196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.725306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.725321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.725411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.725427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.725511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.725527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 
00:36:06.169 [2024-12-16 06:04:39.725684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.725700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.725784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.725800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.725942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.725958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.726105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.726121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.726304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.726321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.726484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.726501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.726670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.726686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.726831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.726852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.727120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.727137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.727316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.727332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 
00:36:06.169 [2024-12-16 06:04:39.727555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.727581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.727829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.727858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.728064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.728081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.728293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.728306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.728525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.728538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.728746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.728758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.728859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.728871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.729021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.729034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.729134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.729146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 00:36:06.169 [2024-12-16 06:04:39.729226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.169 [2024-12-16 06:04:39.729239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.169 qpair failed and we were unable to recover it. 
00:36:06.170 [2024-12-16 06:04:39.729340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.729352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.729579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.729591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.729751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.729762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.729855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.729866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.730082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.730095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.730174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.730185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.730275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.730288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.730447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.730458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.730664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.730676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.730887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.730901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 
00:36:06.170 [2024-12-16 06:04:39.730989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.731001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.731127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.731138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.731216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.731227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.731380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.731391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.731620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.731631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.731799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.731810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.731900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.731912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.732003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.732021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.732178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.732193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.732443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.732458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 
00:36:06.170 [2024-12-16 06:04:39.732616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.732631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.732726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.732743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.732895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.732912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.733144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.733161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.733310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.733326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.733398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.733413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.733519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.733534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.733711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.733727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.733824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.733840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.734010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.734025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 
00:36:06.170 [2024-12-16 06:04:39.734114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.734133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.734223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.734238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.734388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.734404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.734503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.734518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.734657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.734673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.734834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.734853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.735075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.735091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.735196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.735211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.735303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.735319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.735499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.735514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 
00:36:06.170 [2024-12-16 06:04:39.735658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.735673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.170 [2024-12-16 06:04:39.735836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.170 [2024-12-16 06:04:39.735856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.170 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.736012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.736028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.736235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.736250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.736451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.736467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.736645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.736660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.736811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.736827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.736986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.737002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.737156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.737172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.737312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.737328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 
00:36:06.171 [2024-12-16 06:04:39.737487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.737502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.737643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.737658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.737766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.737782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.737989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.738005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.738097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.738112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.738270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.738285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.738489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.738504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.738657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.738678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.738869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.738888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.738989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.739005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 
00:36:06.171 [2024-12-16 06:04:39.739171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.739187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.739395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.739410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.739560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.739575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.739670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.739685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.739840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.739860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.740003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.740019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.740223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.740238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.740319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.740335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.740493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.740509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.740598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.740613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 
00:36:06.171 [2024-12-16 06:04:39.740826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.740853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.741018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.741034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.741133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.741148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.741256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.741271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.741416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.741431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.741666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.741681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.741755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.741777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.741867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.741883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.742046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.742061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.742209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.742224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 
00:36:06.171 [2024-12-16 06:04:39.742316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.742332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.742479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.742496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.742687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.171 [2024-12-16 06:04:39.742702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.171 qpair failed and we were unable to recover it. 00:36:06.171 [2024-12-16 06:04:39.742928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.172 [2024-12-16 06:04:39.742944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.172 qpair failed and we were unable to recover it. 00:36:06.172 [2024-12-16 06:04:39.743100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.172 [2024-12-16 06:04:39.743116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.172 qpair failed and we were unable to recover it. 00:36:06.172 [2024-12-16 06:04:39.743299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.172 [2024-12-16 06:04:39.743314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.172 qpair failed and we were unable to recover it. 00:36:06.172 [2024-12-16 06:04:39.743422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.172 [2024-12-16 06:04:39.743438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.172 qpair failed and we were unable to recover it. 00:36:06.172 [2024-12-16 06:04:39.743647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.172 [2024-12-16 06:04:39.743662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.172 qpair failed and we were unable to recover it. 00:36:06.172 [2024-12-16 06:04:39.743772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.172 [2024-12-16 06:04:39.743787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.172 qpair failed and we were unable to recover it. 00:36:06.172 [2024-12-16 06:04:39.743888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.172 [2024-12-16 06:04:39.743904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.172 qpair failed and we were unable to recover it. 
00:36:06.172 [2024-12-16 06:04:39.744045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.172 [2024-12-16 06:04:39.744060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.172 qpair failed and we were unable to recover it. 00:36:06.172 [2024-12-16 06:04:39.744243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.172 [2024-12-16 06:04:39.744258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.172 qpair failed and we were unable to recover it. 00:36:06.172 [2024-12-16 06:04:39.744475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.172 [2024-12-16 06:04:39.744491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.172 qpair failed and we were unable to recover it. 00:36:06.172 [2024-12-16 06:04:39.744572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.172 [2024-12-16 06:04:39.744588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.172 qpair failed and we were unable to recover it. 00:36:06.172 [2024-12-16 06:04:39.744730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.172 [2024-12-16 06:04:39.744746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.172 qpair failed and we were unable to recover it. 00:36:06.172 [2024-12-16 06:04:39.744832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.172 [2024-12-16 06:04:39.744860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.172 qpair failed and we were unable to recover it. 00:36:06.172 [2024-12-16 06:04:39.744940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.172 [2024-12-16 06:04:39.744956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.172 qpair failed and we were unable to recover it. 00:36:06.172 [2024-12-16 06:04:39.745123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.172 [2024-12-16 06:04:39.745142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.172 qpair failed and we were unable to recover it. 00:36:06.172 [2024-12-16 06:04:39.745301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.172 [2024-12-16 06:04:39.745317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.172 qpair failed and we were unable to recover it. 00:36:06.172 [2024-12-16 06:04:39.745413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.172 [2024-12-16 06:04:39.745429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.172 qpair failed and we were unable to recover it. 
00:36:06.172 [2024-12-16 06:04:39.745525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.172 [2024-12-16 06:04:39.745541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:06.172 qpair failed and we were unable to recover it.
00:36:06.172 [... the same three-line failure pattern (posix.c:1055:posix_sock_create "connect() failed, errno = 111" -> nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock "sock connection error" with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats many more times from 06:04:39.745 to 06:04:39.787, first against tqpair=0xa1cd90, once against tqpair=0x7ffbac000b90, and then against tqpair=0x7ffbb8000b90 ...]
00:36:06.176 [2024-12-16 06:04:39.779002] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:06.177 [2024-12-16 06:04:39.787684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.177 [2024-12-16 06:04:39.787700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420
00:36:06.177 qpair failed and we were unable to recover it.
00:36:06.177 [2024-12-16 06:04:39.787935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.787953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.788129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.788146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.788307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.788324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.788506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.788523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.788752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.788770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.788931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.788949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.789131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.789147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.789295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.789310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.789567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.789582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.789669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.789685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 
00:36:06.177 [2024-12-16 06:04:39.789777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.789793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.789893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.789909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.790067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.790083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.790291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.790307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.790460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.790476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.790626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.790642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.790861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.790877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.791111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.791128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.791308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.791324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.791518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.791534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 
00:36:06.177 [2024-12-16 06:04:39.791765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.791781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.791869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.791885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.792034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.792050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.792231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.792247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.792473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.792489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.792651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.792666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.792849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.792865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.793071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.793088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.793251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.793267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.793524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.793545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 
00:36:06.177 [2024-12-16 06:04:39.793725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.793741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.793981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.793998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.794157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.794173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.794402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.794418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.794563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.794578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.794684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.794700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.794913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.794930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.795076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.795092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.795300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.795315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.795464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.795480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 
00:36:06.177 [2024-12-16 06:04:39.795621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.795637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.795872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.795888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.796098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.796114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.796374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.796390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.796566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.796582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.796743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.796759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.796990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.797007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.177 [2024-12-16 06:04:39.797225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.177 [2024-12-16 06:04:39.797241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.177 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.797476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.797495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.797707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.797728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 
00:36:06.178 [2024-12-16 06:04:39.797937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.797958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.798113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.798130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.798360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.798378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.798589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.798606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.798866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.798885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.799028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.799044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.799204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.799222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.799477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.799494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.799661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.799678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.799833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.799853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 
00:36:06.178 [2024-12-16 06:04:39.800020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.800036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.800257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.800273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.800425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.800442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.800534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.800550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.800784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.800800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.801043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.801059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.801211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.801228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.801390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.801405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.801547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.801562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.801710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.801730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 
00:36:06.178 [2024-12-16 06:04:39.801962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.801980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.802193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.802209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.802373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.802389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.802615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.802631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.802782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.802798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.803028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.803046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.803221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.803237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.803403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.803419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.803502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.803518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.803783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.803800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 
00:36:06.178 [2024-12-16 06:04:39.804123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.804140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.804403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.804421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.804629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.804645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.804883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.804899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.805068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.805085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.805297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.805313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.805499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.805515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.805697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.805713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.805803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.805820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.806013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.806029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 
00:36:06.178 [2024-12-16 06:04:39.806266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.806283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.806526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.806542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.806718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.806734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.806966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.806984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.807181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.807197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.807470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.807486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.807693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.807728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.807851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.807875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.808030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.808046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 00:36:06.178 [2024-12-16 06:04:39.808254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.178 [2024-12-16 06:04:39.808269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.178 qpair failed and we were unable to recover it. 
00:36:06.179 [2024-12-16 06:04:39.808497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.808513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.808745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.808761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.808971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.808987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.809256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.809272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.809479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.809494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.809749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.809764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.809941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.809957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.810132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.810148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.810301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.810317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.810542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.810562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 
00:36:06.179 [2024-12-16 06:04:39.810819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.810835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.810992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.811009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.811240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.811256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.811486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.811501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.811656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.811672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.811852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.811868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.812099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.812115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.812321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.812337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.812532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.812547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.812697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.812713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 
00:36:06.179 [2024-12-16 06:04:39.812962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.812979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.813224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.813239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.813477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.813493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.813725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.813741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.813841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.813861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.814092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.814107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.814342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.814358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.814522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.814538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.814769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.814784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.815018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.815035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 
00:36:06.179 [2024-12-16 06:04:39.815123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.815139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.815295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.815311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.815461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.815477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.815683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.815700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.815941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.815959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.816139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.816155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.816345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.816367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.816607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.816623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.816862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.816879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 00:36:06.179 [2024-12-16 06:04:39.817116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.179 [2024-12-16 06:04:39.817132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.179 qpair failed and we were unable to recover it. 
00:36:06.179 [2024-12-16 06:04:39.817369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.179 [2024-12-16 06:04:39.817385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:06.179 qpair failed and we were unable to recover it.
00:36:06.179 [2024-12-16 06:04:39.817536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.179 [2024-12-16 06:04:39.817552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:06.179 qpair failed and we were unable to recover it.
00:36:06.179 [2024-12-16 06:04:39.817704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.179 [2024-12-16 06:04:39.817720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:06.179 qpair failed and we were unable to recover it.
00:36:06.179 [2024-12-16 06:04:39.817990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.179 [2024-12-16 06:04:39.818010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:06.179 qpair failed and we were unable to recover it.
00:36:06.179 [2024-12-16 06:04:39.818311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.179 [2024-12-16 06:04:39.818330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:06.179 qpair failed and we were unable to recover it.
00:36:06.179 [2024-12-16 06:04:39.818513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.179 [2024-12-16 06:04:39.818531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:06.179 qpair failed and we were unable to recover it.
00:36:06.179 [2024-12-16 06:04:39.818708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.179 [2024-12-16 06:04:39.818724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:06.179 qpair failed and we were unable to recover it.
00:36:06.179 [2024-12-16 06:04:39.818965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.179 [2024-12-16 06:04:39.818984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:06.179 qpair failed and we were unable to recover it.
00:36:06.179 [2024-12-16 06:04:39.819224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.179 [2024-12-16 06:04:39.819240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:06.179 qpair failed and we were unable to recover it.
00:36:06.179 [2024-12-16 06:04:39.819321] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:06.179 [2024-12-16 06:04:39.819347] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:06.179 [2024-12-16 06:04:39.819358] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:06.179 [2024-12-16 06:04:39.819364] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:06.179 [2024-12-16 06:04:39.819369] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:06.179 [2024-12-16 06:04:39.819460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.179 [2024-12-16 06:04:39.819476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:06.179 qpair failed and we were unable to recover it.
00:36:06.179 [2024-12-16 06:04:39.819462] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5
00:36:06.180 [2024-12-16 06:04:39.819570] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6
00:36:06.180 [2024-12-16 06:04:39.819673] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:36:06.180 [2024-12-16 06:04:39.819686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.180 [2024-12-16 06:04:39.819702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:06.180 qpair failed and we were unable to recover it.
00:36:06.180 [2024-12-16 06:04:39.819674] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7
00:36:06.180 [2024-12-16 06:04:39.819890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.180 [2024-12-16 06:04:39.819907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:06.180 qpair failed and we were unable to recover it.
00:36:06.180 [2024-12-16 06:04:39.820140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.180 [2024-12-16 06:04:39.820157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:06.180 qpair failed and we were unable to recover it.
00:36:06.180 [2024-12-16 06:04:39.820320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.180 [2024-12-16 06:04:39.820336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:06.180 qpair failed and we were unable to recover it.
00:36:06.180 [2024-12-16 06:04:39.820517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.180 [2024-12-16 06:04:39.820533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:06.180 qpair failed and we were unable to recover it.
00:36:06.180 [2024-12-16 06:04:39.820690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.180 [2024-12-16 06:04:39.820706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:06.180 qpair failed and we were unable to recover it.
00:36:06.180 [2024-12-16 06:04:39.820965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.180 [2024-12-16 06:04:39.820984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:06.180 qpair failed and we were unable to recover it.
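The app_setup_trace NOTICE lines above describe how tracing for this run can be inspected; a minimal sketch of the two options they mention, using the application name (nvmf), shm id (0), and trace file name quoted in the log (the destination path below is an arbitrary assumption):

  spdk_trace -s nvmf -i 0          # snapshot of tracepoint events while the nvmf target app is still running
  cp /dev/shm/nvmf_trace.0 /tmp/   # or keep the raw trace file for offline analysis/debug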
00:36:06.180 [2024-12-16 06:04:39.821143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.821159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.821367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.821384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.821525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.821540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.821787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.821808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.822001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.822018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.822251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.822267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.822422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.822438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.822619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.822635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.822797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.822812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.823068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.823084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 
00:36:06.180 [2024-12-16 06:04:39.823243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.823259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.823417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.823432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.823640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.823655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.823875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.823891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.824072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.824088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.824243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.824260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.824469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.824491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.824658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.824673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.824908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.824925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.825135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.825151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 
00:36:06.180 [2024-12-16 06:04:39.825412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.825428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.825586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.825602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.825762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.825778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.826034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.826053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.826291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.826308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.826524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.826541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.826652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.826668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.826914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.826932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.827085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.827101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.827327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.827343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 
00:36:06.180 [2024-12-16 06:04:39.827568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.827586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.827750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.827766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.827998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.828015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.828165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.828182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.828357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.828373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.828528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.828545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.828691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.828707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.828947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.828964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.829072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.180 [2024-12-16 06:04:39.829087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.180 qpair failed and we were unable to recover it. 00:36:06.180 [2024-12-16 06:04:39.829256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.829271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 
00:36:06.181 [2024-12-16 06:04:39.829446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.829462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.829639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.829655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.829864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.829881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.830151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.830178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.830482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.830504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.830648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.830664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.830872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.830888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.831040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.831056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.831258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.831274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.831492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.831507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 
00:36:06.181 [2024-12-16 06:04:39.831736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.831752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.831853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.831869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.832099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.832115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.832346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.832362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.832539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.832555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.832659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.832674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.832901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.832922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.833163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.833180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.833414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.833429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.833538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.833554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 
00:36:06.181 [2024-12-16 06:04:39.833698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.833714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.833856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.833871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.834024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.834040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.834221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.834238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.834393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.834408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.834643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.834658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.834893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.834910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.835068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.835084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.835268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.835283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.835468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.835484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 
00:36:06.181 [2024-12-16 06:04:39.835683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.835699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.835852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.835868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.836093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.836108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.836299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.836314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.836525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.836541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.836749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.836765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.836945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.836962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.837196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.837211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.837361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.837378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.837551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.837567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 
00:36:06.181 [2024-12-16 06:04:39.837781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.837796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.837955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.837972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.838170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.838187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.838348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.838365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.838517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.838530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.838781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.838793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.839017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.839030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.839203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.839215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.839443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.839456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.839621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.839633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 
00:36:06.181 [2024-12-16 06:04:39.839856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.839869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.840120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.840133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.840293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.840306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.181 [2024-12-16 06:04:39.840508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.181 [2024-12-16 06:04:39.840520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.181 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.840746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.840758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.840991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.841003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.841263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.841281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.841505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.841517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.841619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.841631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.841859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.841871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 
00:36:06.182 [2024-12-16 06:04:39.842079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.842092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.842191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.842204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.842357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.842369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.842543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.842555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.842807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.842820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.843007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.843020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.843166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.843179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.843336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.843348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.843550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.843562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.843751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.843764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 
00:36:06.182 [2024-12-16 06:04:39.843855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.843868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.844089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.844102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.844327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.844341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.844585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.844597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.844755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.844767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.844946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.844960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.845205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.845218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.845387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.845400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.845624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.845638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.845796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.845808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 
00:36:06.182 [2024-12-16 06:04:39.846032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.846045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.846253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.846266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.846468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.846481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.846676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.846707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.846933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.846952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.847127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.847144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.847301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.847316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.847474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.847490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.847694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.847710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.847807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.847822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 
00:36:06.182 [2024-12-16 06:04:39.847925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.847941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.848171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.848189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.848346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.848363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.848572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.848590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.848738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.848755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.848998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.849015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.849173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.849189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.849388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.849405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.849561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.849577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 00:36:06.182 [2024-12-16 06:04:39.849750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.182 [2024-12-16 06:04:39.849767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.182 qpair failed and we were unable to recover it. 
00:36:06.182 [2024-12-16 06:04:39.849987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.182 [2024-12-16 06:04:39.850004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420
00:36:06.182 qpair failed and we were unable to recover it.
00:36:06.182 [The same three-line failure pattern (posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats back-to-back through 2024-12-16 06:04:39.893832 (log time 00:36:06.186), first against tqpair=0xa1cd90 and then against tqpair=0x7ffbb8000b90 and tqpair=0x7ffbb0000b90, always with addr=10.0.0.2, port=4420.]
00:36:06.186 [2024-12-16 06:04:39.894045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.186 [2024-12-16 06:04:39.894061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-12-16 06:04:39.894203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.186 [2024-12-16 06:04:39.894222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-12-16 06:04:39.894297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.186 [2024-12-16 06:04:39.894312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-12-16 06:04:39.894568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.186 [2024-12-16 06:04:39.894584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-12-16 06:04:39.894667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.186 [2024-12-16 06:04:39.894682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-12-16 06:04:39.894898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.186 [2024-12-16 06:04:39.894914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-12-16 06:04:39.895142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.186 [2024-12-16 06:04:39.895157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-12-16 06:04:39.895413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.186 [2024-12-16 06:04:39.895428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-12-16 06:04:39.895606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.186 [2024-12-16 06:04:39.895621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-12-16 06:04:39.895800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.186 [2024-12-16 06:04:39.895816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.186 qpair failed and we were unable to recover it. 
00:36:06.186 [2024-12-16 06:04:39.896050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.186 [2024-12-16 06:04:39.896067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-12-16 06:04:39.896300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.186 [2024-12-16 06:04:39.896316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-12-16 06:04:39.896559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.186 [2024-12-16 06:04:39.896574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-12-16 06:04:39.896727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.186 [2024-12-16 06:04:39.896743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.896922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.896938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.897090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.897105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.897193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.897208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.897367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.897383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.897540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.897555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.897703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.897718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 
00:36:06.187 [2024-12-16 06:04:39.897952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.897968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.898126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.898141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.898348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.898363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.898599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.898614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.898831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.898850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.899015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.899031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.899241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.899256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.899490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.899505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.899647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.899663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.899870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.899886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 
00:36:06.187 [2024-12-16 06:04:39.900119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.900135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.900304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.900319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.900482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.900498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.900704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.900720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.900805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.900821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.900984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.901000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.901234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.901249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.901522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.901538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.901789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.901805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.902011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.902027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 
00:36:06.187 [2024-12-16 06:04:39.902282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.902298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.902534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.902552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.902796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.902811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.903042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.903059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.903233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.903248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.903444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.903460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.903643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.903659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.903740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.903755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.903843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.903862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.903953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.903969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 
00:36:06.187 [2024-12-16 06:04:39.904216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.904232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.904508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.904523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.904679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.904694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.904856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.904872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.905029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.905045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.905206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.905222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.905403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.905419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.905579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.905595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.905808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.905824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.905997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.906013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 
00:36:06.187 [2024-12-16 06:04:39.906165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.906181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.906341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.906356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.906467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.906482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.906751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.906767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.906922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.906938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.907085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.907101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.907298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.907313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.907498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.907513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.907763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.907778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.908002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.908018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 
00:36:06.187 [2024-12-16 06:04:39.908173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.908188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.187 [2024-12-16 06:04:39.908344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.187 [2024-12-16 06:04:39.908360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.908566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.908581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.908783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.908798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.909013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.909029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.909238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.909254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.909488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.909503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.909717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.909732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.909967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.910005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.910249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.910264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 
00:36:06.188 [2024-12-16 06:04:39.910372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.910387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.910543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.910562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.910742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.910758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.910992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.911009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.911169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.911185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.911426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.911442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.911582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.911597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.911747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.911762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.911935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.911951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.912056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.912072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 
00:36:06.188 [2024-12-16 06:04:39.912303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.912319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.912496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.912511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.912682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.912697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.912932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.912948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.913104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.913120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.913330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.913345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.913572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.913588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.913766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.913781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.913956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.913972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.914201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.914217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 
00:36:06.188 [2024-12-16 06:04:39.914467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.914483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.914655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.914670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.914825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.914840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.915083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.915099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.915319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.915335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.915583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.915599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.915868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.915883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.916133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.916149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.916261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.916277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.916487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.916503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 
00:36:06.188 [2024-12-16 06:04:39.916727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.916742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.916970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.916986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.917127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.917143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.917303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.917319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.917499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.917515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.917745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.917761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:06.188 [2024-12-16 06:04:39.917926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.917942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.918102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.918118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:36:06.188 [2024-12-16 06:04:39.918369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.918385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 
00:36:06.188 [2024-12-16 06:04:39.918532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.918549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:06.188 [2024-12-16 06:04:39.918780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.918796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.918886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:06.188 [2024-12-16 06:04:39.918902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.918995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.919010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.919174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:06.188 [2024-12-16 06:04:39.919190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.919279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.919294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.919451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.919467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.919694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.919710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.919947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.919963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 
00:36:06.188 [2024-12-16 06:04:39.920181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.920197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.920363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.920380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.920552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.920568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.188 qpair failed and we were unable to recover it. 00:36:06.188 [2024-12-16 06:04:39.920746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.188 [2024-12-16 06:04:39.920762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.920986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.921003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.921182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.921199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.921450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.921466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.921649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.921665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.921774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.921790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.921964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.921980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 
00:36:06.189 [2024-12-16 06:04:39.922186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.922204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.922357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.922374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.922603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.922618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.922857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.922873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.923121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.923137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.923314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.923331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.923515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.923533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.923683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.923704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.923944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.923960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.924214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.924229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 
00:36:06.189 [2024-12-16 06:04:39.924382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.924398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.924610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.924625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.924845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.924867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.924948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.924963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.925115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.925130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.925208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.925223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.925372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.925390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.925659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.925674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.925916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.925932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.926098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.926116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 
00:36:06.189 [2024-12-16 06:04:39.926277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.926293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.926413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.926431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.926646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.926662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.926822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.926837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.927060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.927076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.927164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.927180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.927341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.927359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.927594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.927610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.927819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.927834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.928005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.928021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 
00:36:06.189 [2024-12-16 06:04:39.928170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.928186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.928416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.928431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.928596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.928612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.928769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.928784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.928973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.928990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.929223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.929239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.929342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.929359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.929532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.929548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.929755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.929771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.929932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.929949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 
00:36:06.189 [2024-12-16 06:04:39.930106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.930122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.930265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.930280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.930588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.930603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.930835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.930854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.930959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.930976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.931069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.931086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.931266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.931284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.931372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.931390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.931508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.931525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.931785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.931801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 
00:36:06.189 [2024-12-16 06:04:39.931954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.189 [2024-12-16 06:04:39.931970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.189 qpair failed and we were unable to recover it. 00:36:06.189 [2024-12-16 06:04:39.932182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.932198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.932413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.932429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.932677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.932693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.932948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.932964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.933127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.933143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.933373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.933388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.933662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.933678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.933885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.933901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.934058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.934074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 
00:36:06.190 [2024-12-16 06:04:39.934223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.934239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.934444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.934462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.934679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.934694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.934836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.934858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.935041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.935057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.935162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.935177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.935267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.935284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.935386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.935401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.935578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.935593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.935764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.935779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 
00:36:06.190 [2024-12-16 06:04:39.935886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.935902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.936046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.936062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.936220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.936235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.936387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.936403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.936682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.936697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.936862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.936878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.937058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.937073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.937283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.937298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.937396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.937412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.937649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.937664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 
00:36:06.190 [2024-12-16 06:04:39.937803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.937819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.937992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.938009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.938169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.938186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.938261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.938276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.938431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.938446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.938548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.938564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.938795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.938810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.938982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.938999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.939180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.939196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.939338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.939354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 
00:36:06.190 [2024-12-16 06:04:39.939455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.939473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.939631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.939646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.939855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.939871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.940037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.940053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.940137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.940153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.940266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.940282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.940444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.940461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.940607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.940624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.940788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.940804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.941015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.941031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 
00:36:06.190 [2024-12-16 06:04:39.941170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.941185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.941287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.941307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.941421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.941437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.941646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.941662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.941865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.941882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.942032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.942047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.942144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.942159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.942412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.942428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.942638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.942654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.942858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.942874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 
00:36:06.190 [2024-12-16 06:04:39.943027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.943042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.943212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.943229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.943317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.943332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.943612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.943628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.943864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.943880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.190 [2024-12-16 06:04:39.944050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.190 [2024-12-16 06:04:39.944066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.190 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.944227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.944243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.944508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.944524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.944768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.944783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.944900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.944916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 
00:36:06.191 [2024-12-16 06:04:39.945077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.945095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.945251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.945268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.945409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.945425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.945533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.945548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.945754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.945770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.945978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.945994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.946201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.946217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.946330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.946346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.946576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.946592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.946759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.946775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 
00:36:06.191 [2024-12-16 06:04:39.946939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.946956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.947182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.947197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.947426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.947442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.947724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.947739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.947968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.947984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.948137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.948152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.948305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.948321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.948482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.948497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.948647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.948662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.948842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.948871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 
00:36:06.191 [2024-12-16 06:04:39.949028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.949044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.949160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.949179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.949348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.949364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.949646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.949662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.949905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.949922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.950027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.950043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.950143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.950159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.950245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.950261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.950374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.950389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.950486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.950502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 
00:36:06.191 [2024-12-16 06:04:39.950590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.950606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.950835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.950856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.950957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.950974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.951132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.951147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.951248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.951263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.951416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.951434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.951587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.951603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.951810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.951830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.952011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.952028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.952133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.952149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 
00:36:06.191 [2024-12-16 06:04:39.952307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.952322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.952485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.952501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.952658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.952674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.952859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.952875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.952984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.953000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.953145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.953162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.953321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.953338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.953548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.953565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.953787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.953803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.953909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.953926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 
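00:36:06.191 Note on the block above: every retry ends with errno 111, which on Linux is ECONNREFUSED, i.e. nothing is accepting connections at 10.0.0.2:4420 while the host initiator keeps reconnecting; that is the expected behaviour while the target side of the disconnect test is down. A minimal shell sketch of the same failure mode (illustrative only; the address and port are copied from the log, and this relies on bash's /dev/tcp redirection):

  # Try a raw TCP connect to the listener the host is retrying against.
  # With no target listening, bash reports "Connection refused" - the
  # shell-level view of the errno 111 seen in posix_sock_create above.
  if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
      echo "connect to 10.0.0.2:4420 failed (no listener on the target port)"
  fi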
00:36:06.191 [2024-12-16 06:04:39.954108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.954124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.954284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.954300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.954451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.954467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.954671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.954686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.954932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.954949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.955108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.955123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.955284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.955299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 [2024-12-16 06:04:39.955407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.955422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 00:36:06.191 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:06.191 [2024-12-16 06:04:39.955685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.955703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.191 qpair failed and we were unable to recover it. 
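00:36:06.191 Note on the trap registered above: it wires the suite's cleanup (shared-memory diagnostics, then nvmftestfini) to SIGINT, SIGTERM and normal EXIT, so the target is always torn down even if the test is interrupted mid-failure. A generic sketch of the same pattern (dump_debug_state and stop_target are hypothetical placeholders here, not SPDK helpers):

  cleanup() {
      dump_debug_state || :   # best-effort diagnostics; never mask the exit code
      stop_target             # always tear the target down
  }
  trap cleanup SIGINT SIGTERM EXIT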
00:36:06.191 [2024-12-16 06:04:39.955926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.191 [2024-12-16 06:04:39.955942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.192 qpair failed and we were unable to recover it.
00:36:06.192 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:06.192 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:06.192 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:06.192 the same connect()/qpair failure on tqpair=0x7ffbb8000b90 (addr=10.0.0.2, port=4420) occurs 7 more times between 06:04:39.956101 and 06:04:39.957114, interleaved with the trace lines above.
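The rpc_cmd wrapper above issues SPDK JSON-RPC calls against the running target; bdev_malloc_create 64 512 -b Malloc0 asks for a 64 MB RAM-backed bdev with a 512-byte block size, and the RPC returns the new bdev's name, which is why a bare "Malloc0" shows up in the output a little further down. Run by hand it would look roughly like this (a sketch, assuming the target's default RPC socket):

# Manual equivalent of the rpc_cmd call above (sketch)
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MB, 512-byte blocks; prints "Malloc0" on success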
00:36:06.192 [2024-12-16 06:04:39.957240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.192 [2024-12-16 06:04:39.957255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.192 qpair failed and we were unable to recover it. [this triple occurs 19 times on tqpair=0x7ffbb8000b90 between 06:04:39.957240 and 06:04:39.960252]
00:36:06.192 [2024-12-16 06:04:39.960441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.192 [2024-12-16 06:04:39.960468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.192 qpair failed and we were unable to recover it. [this triple occurs 40 times on tqpair=0x7ffbac000b90 between 06:04:39.960441 and 06:04:39.967896]
00:36:06.193 [2024-12-16 06:04:39.968143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.193 [2024-12-16 06:04:39.968173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.193 qpair failed and we were unable to recover it.
00:36:06.193 [2024-12-16 06:04:39.968292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.193 [2024-12-16 06:04:39.968309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.193 qpair failed and we were unable to recover it. [this triple occurs 40 times on tqpair=0xa1cd90 between 06:04:39.968292 and 06:04:39.975714]
00:36:06.193 [2024-12-16 06:04:39.975891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.193 [2024-12-16 06:04:39.975908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.193 qpair failed and we were unable to recover it. [this triple occurs 7 times on tqpair=0xa1cd90 between 06:04:39.975891 and 06:04:39.977099]
00:36:06.193 Malloc0
00:36:06.193 [2024-12-16 06:04:39.977270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.193 [2024-12-16 06:04:39.977286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.193 qpair failed and we were unable to recover it. [this triple occurs 3 times on tqpair=0xa1cd90 between 06:04:39.977270 and 06:04:39.977648]
00:36:06.193 [2024-12-16 06:04:39.977837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.193 [2024-12-16 06:04:39.977858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.193 qpair failed and we were unable to recover it.
00:36:06.193 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:06.193 the same connect()/qpair failure on tqpair=0xa1cd90 (addr=10.0.0.2, port=4420) occurs twice more between 06:04:39.978010 and 06:04:39.978118.
00:36:06.193 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:36:06.193 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:06.193 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:06.193 the same connect()/qpair failure on tqpair=0x7ffbac000b90 (addr=10.0.0.2, port=4420) occurs 5 times between 06:04:39.978317 and 06:04:39.979269, interleaved with the trace lines above.
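nvmf_create_transport -t tcp -o instantiates the TCP transport inside the target (the -o flag is carried over verbatim from the test's own command line and not expanded on here); the "*** TCP Transport Init ***" notice a few entries later is the target acknowledging it. Presumably a subsystem and listener are added on top of this transport later in the test, after which the connect() attempts above can stop being refused. A hand-run equivalent would be roughly the following (a sketch, assuming the target's default RPC socket):

# Manual equivalent of the rpc_cmd call above (sketch)
./scripts/rpc.py nvmf_create_transport -t tcp -o   # create the TCP transport; '-o' kept as in the test's invocation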
00:36:06.193 [2024-12-16 06:04:39.979472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.194 [2024-12-16 06:04:39.979487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.194 qpair failed and we were unable to recover it. [this triple occurs 7 times on tqpair=0x7ffbac000b90 between 06:04:39.979472 and 06:04:39.980843]
00:36:06.194 [2024-12-16 06:04:39.981099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.194 [2024-12-16 06:04:39.981118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.194 qpair failed and we were unable to recover it. [this triple occurs 13 times on tqpair=0xa1cd90 between 06:04:39.981099 and 06:04:39.983315]
00:36:06.194 [2024-12-16 06:04:39.983590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.194 [2024-12-16 06:04:39.983606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.194 qpair failed and we were unable to recover it. [this triple occurs 7 times on tqpair=0xa1cd90 between 06:04:39.983590 and 06:04:39.984680]
00:36:06.194 [2024-12-16 06:04:39.984695] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:06.194 [2024-12-16 06:04:39.984884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.194 [2024-12-16 06:04:39.984907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.194 qpair failed and we were unable to recover it. [this triple occurs 3 times on tqpair=0x7ffbb8000b90 between 06:04:39.984884 and 06:04:39.985197]
00:36:06.194 [2024-12-16 06:04:39.985370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.194 [2024-12-16 06:04:39.985386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.194 qpair failed and we were unable to recover it. [this triple occurs 27 times on tqpair=0x7ffbb8000b90 between 06:04:39.985370 and 06:04:39.990815]
00:36:06.194 [2024-12-16 06:04:39.991010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.194 [2024-12-16 06:04:39.991028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.194 qpair failed and we were unable to recover it. [this triple occurs 3 times on tqpair=0xa1cd90 between 06:04:39.991010 and 06:04:39.991352]
00:36:06.194 [2024-12-16 06:04:39.991576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.194 [2024-12-16 06:04:39.991590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.194 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.991838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.991859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.992116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.992133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.992343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.992358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.992564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.992580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.992808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.992823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.993058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.993074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.993293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.993308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.455 [2024-12-16 06:04:39.993540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.993557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 
00:36:06.455 [2024-12-16 06:04:39.993818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:06.455 [2024-12-16 06:04:39.993835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.994057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.994073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.455 [2024-12-16 06:04:39.994311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.994327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:06.455 [2024-12-16 06:04:39.994582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.994601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.994824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.994840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.995003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.995019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.995223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.995238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.995474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.995489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.995590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.995606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 
00:36:06.455 [2024-12-16 06:04:39.995703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.995718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.995889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.995906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.996137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.996152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.996310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.996327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.996512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.996528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.996706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.996721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.996827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.996843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.997079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.997096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.997245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.997260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.997345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.997361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 
00:36:06.455 [2024-12-16 06:04:39.997592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.997608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.997752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.997768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.998002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.998018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.998237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.998252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.998402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.998418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.998655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.998671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.998817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.998832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.455 [2024-12-16 06:04:39.999123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.455 [2024-12-16 06:04:39.999145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.455 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:39.999403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:39.999420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:39.999595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:39.999610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 
00:36:06.456 [2024-12-16 06:04:39.999704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:39.999719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:39.999937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:39.999959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.000181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.000197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.000337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.000353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.000565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.000580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.000759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.000776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.000946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.000962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.001122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.001138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.001299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.001316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 
00:36:06.456 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.456 [2024-12-16 06:04:40.001530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.001547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.001734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.001749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.001842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.001862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.456 06:04:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.002059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.002077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 06:04:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.456 [2024-12-16 06:04:40.002240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.002259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.002361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.002376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 06:04:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:06.456 [2024-12-16 06:04:40.002610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.002627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.002782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.002799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 
00:36:06.456 [2024-12-16 06:04:40.002969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.002986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.003234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.003250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.003367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.003382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.003545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.003561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.003664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.003679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.003819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.003835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.004000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.004017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.004183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.004199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.004453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.004469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.004670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.004685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 
00:36:06.456 [2024-12-16 06:04:40.004843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.004867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.004966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.004982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.005214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.005229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.005423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.005439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.005748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.005767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.005950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.005994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1cd90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.006180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.006251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.006566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.006598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.006720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.006735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.006925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.006944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 
00:36:06.456 [2024-12-16 06:04:40.007162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.007234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.456 qpair failed and we were unable to recover it. 00:36:06.456 [2024-12-16 06:04:40.007403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.456 [2024-12-16 06:04:40.007435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.007636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.007651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.007883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.007902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.008050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.008073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.008185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.008201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.008307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.008332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.008499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.008526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.008796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.008810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.008972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.008985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 
00:36:06.457 [2024-12-16 06:04:40.009073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.009085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.009172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.009184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.009335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.009347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.009424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.009436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 06:04:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.457 [2024-12-16 06:04:40.009593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.009606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.009858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.009871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 06:04:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:06.457 [2024-12-16 06:04:40.010057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.010070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.010161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.010183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.010382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.010393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 
00:36:06.457 06:04:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.457 [2024-12-16 06:04:40.010544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.010555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.010722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.010734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 06:04:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:06.457 [2024-12-16 06:04:40.010885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.010896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.011054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.011065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.011162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.011172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.011368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.011379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.011599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.011610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.011700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.011711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.011789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.011800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 
00:36:06.457 [2024-12-16 06:04:40.011961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.011973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.012138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.012149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.012246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.012257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.012504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.012514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.012715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.012726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.012972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.012984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.013132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.013143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.013314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.013325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.013594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.013606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.013800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.013811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 
00:36:06.457 [2024-12-16 06:04:40.014038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.014049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.014255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.014265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.014503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.014513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb0000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.457 [2024-12-16 06:04:40.014799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.457 [2024-12-16 06:04:40.014826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbb8000b90 with addr=10.0.0.2, port=4420 00:36:06.457 qpair failed and we were unable to recover it. 00:36:06.458 [2024-12-16 06:04:40.015088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.458 [2024-12-16 06:04:40.015119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.458 qpair failed and we were unable to recover it. 00:36:06.458 [2024-12-16 06:04:40.015349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.458 [2024-12-16 06:04:40.015366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.458 qpair failed and we were unable to recover it. 00:36:06.458 [2024-12-16 06:04:40.015621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.458 [2024-12-16 06:04:40.015637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.458 qpair failed and we were unable to recover it. 00:36:06.458 [2024-12-16 06:04:40.015794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.458 [2024-12-16 06:04:40.015809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.458 qpair failed and we were unable to recover it. 00:36:06.458 [2024-12-16 06:04:40.016022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.458 [2024-12-16 06:04:40.016039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.458 qpair failed and we were unable to recover it. 00:36:06.458 [2024-12-16 06:04:40.016198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.458 [2024-12-16 06:04:40.016214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.458 qpair failed and we were unable to recover it. 
00:36:06.458 [2024-12-16 06:04:40.016363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.458 [2024-12-16 06:04:40.016379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.458 qpair failed and we were unable to recover it. 00:36:06.458 [2024-12-16 06:04:40.016600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.458 [2024-12-16 06:04:40.016616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.458 qpair failed and we were unable to recover it. 00:36:06.458 [2024-12-16 06:04:40.016817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.458 [2024-12-16 06:04:40.016833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffbac000b90 with addr=10.0.0.2, port=4420 00:36:06.458 qpair failed and we were unable to recover it. 00:36:06.458 [2024-12-16 06:04:40.016983] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:06.458 06:04:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.458 06:04:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:06.458 06:04:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.458 06:04:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:06.458 [2024-12-16 06:04:40.025479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.458 [2024-12-16 06:04:40.025580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.458 [2024-12-16 06:04:40.025615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.458 [2024-12-16 06:04:40.025627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.458 [2024-12-16 06:04:40.025636] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.458 [2024-12-16 06:04:40.025664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.458 qpair failed and we were unable to recover it. 
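The rpc_cmd fragments interleaved with the errors above are the target-side bring-up driven by host/target_disconnect.sh: create the subsystem, attach the Malloc0 namespace, then add the TCP data and discovery listeners, at which point the "*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***" notice appears and connections stop being refused at the TCP level. Reconstructed outside the harness, the same sequence would look roughly like the following sketch using SPDK's scripts/rpc.py; the nvmf_create_transport step is only implied by the earlier "TCP Transport Init" notice, so its flags below are an assumption:

# sketch of the target bring-up mirrored from the rpc_cmd calls visible in this log
scripts/rpc.py nvmf_create_transport -t tcp                       # assumed; only the Init notice is logged
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420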
00:36:06.458 06:04:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.458 06:04:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3579385 00:36:06.458 [2024-12-16 06:04:40.035315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.458 [2024-12-16 06:04:40.035381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.458 [2024-12-16 06:04:40.035398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.458 [2024-12-16 06:04:40.035406] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.458 [2024-12-16 06:04:40.035412] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.458 [2024-12-16 06:04:40.035428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.458 qpair failed and we were unable to recover it. 00:36:06.458 [2024-12-16 06:04:40.045321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.458 [2024-12-16 06:04:40.045379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.458 [2024-12-16 06:04:40.045393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.458 [2024-12-16 06:04:40.045400] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.458 [2024-12-16 06:04:40.045406] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.458 [2024-12-16 06:04:40.045421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.458 qpair failed and we were unable to recover it. 00:36:06.458 [2024-12-16 06:04:40.055301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.458 [2024-12-16 06:04:40.055360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.458 [2024-12-16 06:04:40.055374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.458 [2024-12-16 06:04:40.055380] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.458 [2024-12-16 06:04:40.055386] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.458 [2024-12-16 06:04:40.055401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.458 qpair failed and we were unable to recover it. 
00:36:06.458 [2024-12-16 06:04:40.065351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.458 [2024-12-16 06:04:40.065421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.458 [2024-12-16 06:04:40.065440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.458 [2024-12-16 06:04:40.065449] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.458 [2024-12-16 06:04:40.065460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.458 [2024-12-16 06:04:40.065480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.458 qpair failed and we were unable to recover it. 00:36:06.458 [2024-12-16 06:04:40.075328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.458 [2024-12-16 06:04:40.075410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.458 [2024-12-16 06:04:40.075424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.458 [2024-12-16 06:04:40.075430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.458 [2024-12-16 06:04:40.075436] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.458 [2024-12-16 06:04:40.075451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.458 qpair failed and we were unable to recover it. 00:36:06.458 [2024-12-16 06:04:40.085322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.458 [2024-12-16 06:04:40.085375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.458 [2024-12-16 06:04:40.085388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.458 [2024-12-16 06:04:40.085394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.458 [2024-12-16 06:04:40.085400] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.458 [2024-12-16 06:04:40.085415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.458 qpair failed and we were unable to recover it. 
00:36:06.458 [2024-12-16 06:04:40.095281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.458 [2024-12-16 06:04:40.095339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.458 [2024-12-16 06:04:40.095353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.458 [2024-12-16 06:04:40.095360] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.458 [2024-12-16 06:04:40.095366] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.458 [2024-12-16 06:04:40.095380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.458 qpair failed and we were unable to recover it. 00:36:06.458 [2024-12-16 06:04:40.105410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.458 [2024-12-16 06:04:40.105468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.458 [2024-12-16 06:04:40.105481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.458 [2024-12-16 06:04:40.105488] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.458 [2024-12-16 06:04:40.105494] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.458 [2024-12-16 06:04:40.105508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.458 qpair failed and we were unable to recover it. 00:36:06.458 [2024-12-16 06:04:40.115447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.458 [2024-12-16 06:04:40.115501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.458 [2024-12-16 06:04:40.115515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.458 [2024-12-16 06:04:40.115522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.458 [2024-12-16 06:04:40.115528] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.458 [2024-12-16 06:04:40.115543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.458 qpair failed and we were unable to recover it. 
00:36:06.458 [2024-12-16 06:04:40.125449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.458 [2024-12-16 06:04:40.125506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.458 [2024-12-16 06:04:40.125520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.458 [2024-12-16 06:04:40.125527] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.458 [2024-12-16 06:04:40.125533] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.459 [2024-12-16 06:04:40.125548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.459 qpair failed and we were unable to recover it. 00:36:06.459 [2024-12-16 06:04:40.135470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.459 [2024-12-16 06:04:40.135528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.459 [2024-12-16 06:04:40.135541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.459 [2024-12-16 06:04:40.135547] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.459 [2024-12-16 06:04:40.135553] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.459 [2024-12-16 06:04:40.135568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.459 qpair failed and we were unable to recover it. 00:36:06.459 [2024-12-16 06:04:40.145493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.459 [2024-12-16 06:04:40.145546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.459 [2024-12-16 06:04:40.145559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.459 [2024-12-16 06:04:40.145565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.459 [2024-12-16 06:04:40.145571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.459 [2024-12-16 06:04:40.145585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.459 qpair failed and we were unable to recover it. 
00:36:06.459 [2024-12-16 06:04:40.155556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.459 [2024-12-16 06:04:40.155616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.459 [2024-12-16 06:04:40.155629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.459 [2024-12-16 06:04:40.155639] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.459 [2024-12-16 06:04:40.155645] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.459 [2024-12-16 06:04:40.155659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.459 qpair failed and we were unable to recover it. 00:36:06.459 [2024-12-16 06:04:40.165535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.459 [2024-12-16 06:04:40.165593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.459 [2024-12-16 06:04:40.165606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.459 [2024-12-16 06:04:40.165613] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.459 [2024-12-16 06:04:40.165618] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.459 [2024-12-16 06:04:40.165633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.459 qpair failed and we were unable to recover it. 00:36:06.459 [2024-12-16 06:04:40.175564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.459 [2024-12-16 06:04:40.175622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.459 [2024-12-16 06:04:40.175635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.459 [2024-12-16 06:04:40.175641] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.459 [2024-12-16 06:04:40.175648] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.459 [2024-12-16 06:04:40.175662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.459 qpair failed and we were unable to recover it. 
00:36:06.459 [2024-12-16 06:04:40.185611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.459 [2024-12-16 06:04:40.185668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.459 [2024-12-16 06:04:40.185681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.459 [2024-12-16 06:04:40.185687] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.459 [2024-12-16 06:04:40.185693] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.459 [2024-12-16 06:04:40.185707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.459 qpair failed and we were unable to recover it. 00:36:06.459 [2024-12-16 06:04:40.195611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.459 [2024-12-16 06:04:40.195693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.459 [2024-12-16 06:04:40.195707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.459 [2024-12-16 06:04:40.195714] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.459 [2024-12-16 06:04:40.195720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.459 [2024-12-16 06:04:40.195736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.459 qpair failed and we were unable to recover it. 00:36:06.459 [2024-12-16 06:04:40.205642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.459 [2024-12-16 06:04:40.205698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.459 [2024-12-16 06:04:40.205711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.459 [2024-12-16 06:04:40.205717] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.459 [2024-12-16 06:04:40.205723] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.459 [2024-12-16 06:04:40.205737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.459 qpair failed and we were unable to recover it. 
00:36:06.459 [2024-12-16 06:04:40.215685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.459 [2024-12-16 06:04:40.215742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.459 [2024-12-16 06:04:40.215755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.459 [2024-12-16 06:04:40.215761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.459 [2024-12-16 06:04:40.215767] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.459 [2024-12-16 06:04:40.215781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.459 qpair failed and we were unable to recover it. 00:36:06.459 [2024-12-16 06:04:40.225719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.459 [2024-12-16 06:04:40.225777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.459 [2024-12-16 06:04:40.225790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.459 [2024-12-16 06:04:40.225796] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.459 [2024-12-16 06:04:40.225802] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.459 [2024-12-16 06:04:40.225817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.459 qpair failed and we were unable to recover it. 00:36:06.459 [2024-12-16 06:04:40.235733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.459 [2024-12-16 06:04:40.235790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.459 [2024-12-16 06:04:40.235803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.459 [2024-12-16 06:04:40.235810] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.459 [2024-12-16 06:04:40.235815] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.459 [2024-12-16 06:04:40.235829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.459 qpair failed and we were unable to recover it. 
00:36:06.459 [2024-12-16 06:04:40.245751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.459 [2024-12-16 06:04:40.245805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.459 [2024-12-16 06:04:40.245817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.459 [2024-12-16 06:04:40.245827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.459 [2024-12-16 06:04:40.245833] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.459 [2024-12-16 06:04:40.245850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.459 qpair failed and we were unable to recover it. 00:36:06.459 [2024-12-16 06:04:40.255799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.459 [2024-12-16 06:04:40.255862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.459 [2024-12-16 06:04:40.255876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.459 [2024-12-16 06:04:40.255882] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.460 [2024-12-16 06:04:40.255888] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.460 [2024-12-16 06:04:40.255903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.460 qpair failed and we were unable to recover it. 00:36:06.460 [2024-12-16 06:04:40.265845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.460 [2024-12-16 06:04:40.265906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.460 [2024-12-16 06:04:40.265919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.460 [2024-12-16 06:04:40.265926] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.460 [2024-12-16 06:04:40.265932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.460 [2024-12-16 06:04:40.265947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.460 qpair failed and we were unable to recover it. 
00:36:06.460 [2024-12-16 06:04:40.275769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.460 [2024-12-16 06:04:40.275824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.460 [2024-12-16 06:04:40.275838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.460 [2024-12-16 06:04:40.275844] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.460 [2024-12-16 06:04:40.275854] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.460 [2024-12-16 06:04:40.275868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.460 qpair failed and we were unable to recover it. 00:36:06.460 [2024-12-16 06:04:40.285869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.460 [2024-12-16 06:04:40.285930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.460 [2024-12-16 06:04:40.285943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.460 [2024-12-16 06:04:40.285950] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.460 [2024-12-16 06:04:40.285955] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.460 [2024-12-16 06:04:40.285969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.460 qpair failed and we were unable to recover it. 00:36:06.460 [2024-12-16 06:04:40.295962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.460 [2024-12-16 06:04:40.296066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.460 [2024-12-16 06:04:40.296080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.460 [2024-12-16 06:04:40.296086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.460 [2024-12-16 06:04:40.296092] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.460 [2024-12-16 06:04:40.296107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.460 qpair failed and we were unable to recover it. 
00:36:06.460 [2024-12-16 06:04:40.305940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.460 [2024-12-16 06:04:40.306017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.460 [2024-12-16 06:04:40.306031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.460 [2024-12-16 06:04:40.306038] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.460 [2024-12-16 06:04:40.306044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.460 [2024-12-16 06:04:40.306059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.460 qpair failed and we were unable to recover it. 00:36:06.719 [2024-12-16 06:04:40.315959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.719 [2024-12-16 06:04:40.316017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.719 [2024-12-16 06:04:40.316030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.719 [2024-12-16 06:04:40.316036] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.719 [2024-12-16 06:04:40.316042] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.719 [2024-12-16 06:04:40.316055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.719 qpair failed and we were unable to recover it. 00:36:06.719 [2024-12-16 06:04:40.325977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.719 [2024-12-16 06:04:40.326040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.719 [2024-12-16 06:04:40.326055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.719 [2024-12-16 06:04:40.326061] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.719 [2024-12-16 06:04:40.326067] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.719 [2024-12-16 06:04:40.326082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.719 qpair failed and we were unable to recover it. 
00:36:06.719 [2024-12-16 06:04:40.336009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.719 [2024-12-16 06:04:40.336064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.719 [2024-12-16 06:04:40.336080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.719 [2024-12-16 06:04:40.336086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.719 [2024-12-16 06:04:40.336092] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.719 [2024-12-16 06:04:40.336106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.719 qpair failed and we were unable to recover it. 00:36:06.719 [2024-12-16 06:04:40.346040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.719 [2024-12-16 06:04:40.346143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.719 [2024-12-16 06:04:40.346156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.719 [2024-12-16 06:04:40.346163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.719 [2024-12-16 06:04:40.346169] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.719 [2024-12-16 06:04:40.346183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.719 qpair failed and we were unable to recover it. 00:36:06.719 [2024-12-16 06:04:40.356017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.719 [2024-12-16 06:04:40.356073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.719 [2024-12-16 06:04:40.356087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.719 [2024-12-16 06:04:40.356093] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.719 [2024-12-16 06:04:40.356099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.719 [2024-12-16 06:04:40.356113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.719 qpair failed and we were unable to recover it. 
00:36:06.719 [2024-12-16 06:04:40.366069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.719 [2024-12-16 06:04:40.366127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.719 [2024-12-16 06:04:40.366140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.719 [2024-12-16 06:04:40.366147] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.719 [2024-12-16 06:04:40.366153] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.719 [2024-12-16 06:04:40.366168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.719 qpair failed and we were unable to recover it. 00:36:06.719 [2024-12-16 06:04:40.376119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.719 [2024-12-16 06:04:40.376176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.719 [2024-12-16 06:04:40.376189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.719 [2024-12-16 06:04:40.376195] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.719 [2024-12-16 06:04:40.376201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.719 [2024-12-16 06:04:40.376218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.719 qpair failed and we were unable to recover it. 00:36:06.719 [2024-12-16 06:04:40.386159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.719 [2024-12-16 06:04:40.386214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.719 [2024-12-16 06:04:40.386226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.719 [2024-12-16 06:04:40.386232] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.719 [2024-12-16 06:04:40.386238] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.719 [2024-12-16 06:04:40.386252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.719 qpair failed and we were unable to recover it. 
00:36:06.719 [2024-12-16 06:04:40.396227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.719 [2024-12-16 06:04:40.396309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.719 [2024-12-16 06:04:40.396327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.719 [2024-12-16 06:04:40.396335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.719 [2024-12-16 06:04:40.396341] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.719 [2024-12-16 06:04:40.396357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.719 qpair failed and we were unable to recover it. 00:36:06.719 [2024-12-16 06:04:40.406217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.719 [2024-12-16 06:04:40.406273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.719 [2024-12-16 06:04:40.406287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.720 [2024-12-16 06:04:40.406293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.720 [2024-12-16 06:04:40.406299] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.720 [2024-12-16 06:04:40.406314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.720 qpair failed and we were unable to recover it. 00:36:06.720 [2024-12-16 06:04:40.416305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.720 [2024-12-16 06:04:40.416362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.720 [2024-12-16 06:04:40.416376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.720 [2024-12-16 06:04:40.416382] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.720 [2024-12-16 06:04:40.416388] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.720 [2024-12-16 06:04:40.416403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.720 qpair failed and we were unable to recover it. 
00:36:06.720 [2024-12-16 06:04:40.426286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.720 [2024-12-16 06:04:40.426343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.720 [2024-12-16 06:04:40.426360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.720 [2024-12-16 06:04:40.426366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.720 [2024-12-16 06:04:40.426372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.720 [2024-12-16 06:04:40.426387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.720 qpair failed and we were unable to recover it. 00:36:06.720 [2024-12-16 06:04:40.436220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.720 [2024-12-16 06:04:40.436281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.720 [2024-12-16 06:04:40.436294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.720 [2024-12-16 06:04:40.436301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.720 [2024-12-16 06:04:40.436307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.720 [2024-12-16 06:04:40.436321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.720 qpair failed and we were unable to recover it. 00:36:06.720 [2024-12-16 06:04:40.446260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.720 [2024-12-16 06:04:40.446341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.720 [2024-12-16 06:04:40.446353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.720 [2024-12-16 06:04:40.446360] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.720 [2024-12-16 06:04:40.446365] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.720 [2024-12-16 06:04:40.446380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.720 qpair failed and we were unable to recover it. 
00:36:06.720 [2024-12-16 06:04:40.456377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.720 [2024-12-16 06:04:40.456443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.720 [2024-12-16 06:04:40.456457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.720 [2024-12-16 06:04:40.456464] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.720 [2024-12-16 06:04:40.456470] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.720 [2024-12-16 06:04:40.456484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.720 qpair failed and we were unable to recover it. 00:36:06.720 [2024-12-16 06:04:40.466349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.720 [2024-12-16 06:04:40.466413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.720 [2024-12-16 06:04:40.466426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.720 [2024-12-16 06:04:40.466432] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.720 [2024-12-16 06:04:40.466441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.720 [2024-12-16 06:04:40.466456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.720 qpair failed and we were unable to recover it. 00:36:06.720 [2024-12-16 06:04:40.476339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.720 [2024-12-16 06:04:40.476393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.720 [2024-12-16 06:04:40.476406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.720 [2024-12-16 06:04:40.476412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.720 [2024-12-16 06:04:40.476419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.720 [2024-12-16 06:04:40.476433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.720 qpair failed and we were unable to recover it. 
00:36:06.720 [2024-12-16 06:04:40.486425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.720 [2024-12-16 06:04:40.486477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.720 [2024-12-16 06:04:40.486490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.720 [2024-12-16 06:04:40.486496] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.720 [2024-12-16 06:04:40.486503] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.720 [2024-12-16 06:04:40.486517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.720 qpair failed and we were unable to recover it. 00:36:06.720 [2024-12-16 06:04:40.496411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.720 [2024-12-16 06:04:40.496469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.720 [2024-12-16 06:04:40.496482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.720 [2024-12-16 06:04:40.496488] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.720 [2024-12-16 06:04:40.496494] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.720 [2024-12-16 06:04:40.496508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.720 qpair failed and we were unable to recover it. 00:36:06.720 [2024-12-16 06:04:40.506431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.720 [2024-12-16 06:04:40.506488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.720 [2024-12-16 06:04:40.506501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.720 [2024-12-16 06:04:40.506507] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.720 [2024-12-16 06:04:40.506513] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.720 [2024-12-16 06:04:40.506527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.720 qpair failed and we were unable to recover it. 
00:36:06.720 [2024-12-16 06:04:40.516461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.720 [2024-12-16 06:04:40.516517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.720 [2024-12-16 06:04:40.516531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.720 [2024-12-16 06:04:40.516537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.720 [2024-12-16 06:04:40.516543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.720 [2024-12-16 06:04:40.516557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.720 qpair failed and we were unable to recover it. 00:36:06.720 [2024-12-16 06:04:40.526566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.720 [2024-12-16 06:04:40.526630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.720 [2024-12-16 06:04:40.526643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.720 [2024-12-16 06:04:40.526649] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.720 [2024-12-16 06:04:40.526655] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.720 [2024-12-16 06:04:40.526670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.720 qpair failed and we were unable to recover it. 00:36:06.720 [2024-12-16 06:04:40.536523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.720 [2024-12-16 06:04:40.536579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.720 [2024-12-16 06:04:40.536593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.720 [2024-12-16 06:04:40.536601] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.720 [2024-12-16 06:04:40.536607] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.720 [2024-12-16 06:04:40.536622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.720 qpair failed and we were unable to recover it. 
00:36:06.720 [2024-12-16 06:04:40.546607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.721 [2024-12-16 06:04:40.546663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.721 [2024-12-16 06:04:40.546676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.721 [2024-12-16 06:04:40.546682] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.721 [2024-12-16 06:04:40.546688] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.721 [2024-12-16 06:04:40.546702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.721 qpair failed and we were unable to recover it. 00:36:06.721 [2024-12-16 06:04:40.556628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.721 [2024-12-16 06:04:40.556686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.721 [2024-12-16 06:04:40.556700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.721 [2024-12-16 06:04:40.556707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.721 [2024-12-16 06:04:40.556716] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.721 [2024-12-16 06:04:40.556730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.721 qpair failed and we were unable to recover it. 00:36:06.721 [2024-12-16 06:04:40.566671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.721 [2024-12-16 06:04:40.566773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.721 [2024-12-16 06:04:40.566786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.721 [2024-12-16 06:04:40.566793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.721 [2024-12-16 06:04:40.566800] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.721 [2024-12-16 06:04:40.566815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.721 qpair failed and we were unable to recover it. 
00:36:06.980 [2024-12-16 06:04:40.576698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.980 [2024-12-16 06:04:40.576752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.980 [2024-12-16 06:04:40.576765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.980 [2024-12-16 06:04:40.576771] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.980 [2024-12-16 06:04:40.576777] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.980 [2024-12-16 06:04:40.576791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.980 qpair failed and we were unable to recover it. 00:36:06.980 [2024-12-16 06:04:40.586756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.980 [2024-12-16 06:04:40.586814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.980 [2024-12-16 06:04:40.586828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.980 [2024-12-16 06:04:40.586834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.980 [2024-12-16 06:04:40.586840] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.980 [2024-12-16 06:04:40.586859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.980 qpair failed and we were unable to recover it. 00:36:06.980 [2024-12-16 06:04:40.596693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.980 [2024-12-16 06:04:40.596749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.980 [2024-12-16 06:04:40.596762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.980 [2024-12-16 06:04:40.596768] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.980 [2024-12-16 06:04:40.596774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.980 [2024-12-16 06:04:40.596788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.980 qpair failed and we were unable to recover it. 
00:36:06.980 [2024-12-16 06:04:40.606717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.980 [2024-12-16 06:04:40.606771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.980 [2024-12-16 06:04:40.606784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.980 [2024-12-16 06:04:40.606791] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.980 [2024-12-16 06:04:40.606797] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.980 [2024-12-16 06:04:40.606811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.980 qpair failed and we were unable to recover it. 00:36:06.980 [2024-12-16 06:04:40.616829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.980 [2024-12-16 06:04:40.616895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.980 [2024-12-16 06:04:40.616909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.980 [2024-12-16 06:04:40.616916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.980 [2024-12-16 06:04:40.616922] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.980 [2024-12-16 06:04:40.616937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.980 qpair failed and we were unable to recover it. 00:36:06.981 [2024-12-16 06:04:40.626786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.981 [2024-12-16 06:04:40.626843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.981 [2024-12-16 06:04:40.626859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.981 [2024-12-16 06:04:40.626866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.981 [2024-12-16 06:04:40.626872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.981 [2024-12-16 06:04:40.626887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.981 qpair failed and we were unable to recover it. 
00:36:06.981 [2024-12-16 06:04:40.636902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.981 [2024-12-16 06:04:40.636978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.981 [2024-12-16 06:04:40.636992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.981 [2024-12-16 06:04:40.636998] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.981 [2024-12-16 06:04:40.637004] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.981 [2024-12-16 06:04:40.637018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.981 qpair failed and we were unable to recover it. 00:36:06.981 [2024-12-16 06:04:40.646844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.981 [2024-12-16 06:04:40.646905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.981 [2024-12-16 06:04:40.646918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.981 [2024-12-16 06:04:40.646929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.981 [2024-12-16 06:04:40.646934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.981 [2024-12-16 06:04:40.646949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.981 qpair failed and we were unable to recover it. 00:36:06.981 [2024-12-16 06:04:40.656966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.981 [2024-12-16 06:04:40.657071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.981 [2024-12-16 06:04:40.657085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.981 [2024-12-16 06:04:40.657091] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.981 [2024-12-16 06:04:40.657098] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.981 [2024-12-16 06:04:40.657112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.981 qpair failed and we were unable to recover it. 
00:36:06.981 [2024-12-16 06:04:40.666946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.981 [2024-12-16 06:04:40.667026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.981 [2024-12-16 06:04:40.667040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.981 [2024-12-16 06:04:40.667047] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.981 [2024-12-16 06:04:40.667053] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.981 [2024-12-16 06:04:40.667068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.981 qpair failed and we were unable to recover it. 00:36:06.981 [2024-12-16 06:04:40.676927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.981 [2024-12-16 06:04:40.676978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.981 [2024-12-16 06:04:40.676991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.981 [2024-12-16 06:04:40.676997] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.981 [2024-12-16 06:04:40.677003] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.981 [2024-12-16 06:04:40.677017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.981 qpair failed and we were unable to recover it. 00:36:06.981 [2024-12-16 06:04:40.686996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.981 [2024-12-16 06:04:40.687048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.981 [2024-12-16 06:04:40.687061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.981 [2024-12-16 06:04:40.687067] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.981 [2024-12-16 06:04:40.687073] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.981 [2024-12-16 06:04:40.687087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.981 qpair failed and we were unable to recover it. 
00:36:06.981 [2024-12-16 06:04:40.697009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.981 [2024-12-16 06:04:40.697064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.981 [2024-12-16 06:04:40.697077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.981 [2024-12-16 06:04:40.697083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.981 [2024-12-16 06:04:40.697089] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.981 [2024-12-16 06:04:40.697103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.981 qpair failed and we were unable to recover it. 00:36:06.981 [2024-12-16 06:04:40.707014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.981 [2024-12-16 06:04:40.707072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.981 [2024-12-16 06:04:40.707086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.981 [2024-12-16 06:04:40.707092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.981 [2024-12-16 06:04:40.707098] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.981 [2024-12-16 06:04:40.707112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.981 qpair failed and we were unable to recover it. 00:36:06.981 [2024-12-16 06:04:40.717034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.981 [2024-12-16 06:04:40.717086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.981 [2024-12-16 06:04:40.717101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.981 [2024-12-16 06:04:40.717108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.981 [2024-12-16 06:04:40.717114] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.981 [2024-12-16 06:04:40.717128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.981 qpair failed and we were unable to recover it. 
00:36:06.981 [2024-12-16 06:04:40.727060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.981 [2024-12-16 06:04:40.727117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.981 [2024-12-16 06:04:40.727130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.981 [2024-12-16 06:04:40.727138] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.981 [2024-12-16 06:04:40.727143] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.981 [2024-12-16 06:04:40.727158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.981 qpair failed and we were unable to recover it. 00:36:06.981 [2024-12-16 06:04:40.737104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.981 [2024-12-16 06:04:40.737164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.981 [2024-12-16 06:04:40.737179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.981 [2024-12-16 06:04:40.737188] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.981 [2024-12-16 06:04:40.737194] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.981 [2024-12-16 06:04:40.737208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.981 qpair failed and we were unable to recover it. 00:36:06.981 [2024-12-16 06:04:40.747128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.981 [2024-12-16 06:04:40.747185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.981 [2024-12-16 06:04:40.747198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.981 [2024-12-16 06:04:40.747205] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.981 [2024-12-16 06:04:40.747211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.981 [2024-12-16 06:04:40.747225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.981 qpair failed and we were unable to recover it. 
00:36:06.981 [2024-12-16 06:04:40.757131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.981 [2024-12-16 06:04:40.757185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.981 [2024-12-16 06:04:40.757199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.981 [2024-12-16 06:04:40.757205] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.981 [2024-12-16 06:04:40.757211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.982 [2024-12-16 06:04:40.757226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.982 qpair failed and we were unable to recover it. 00:36:06.982 [2024-12-16 06:04:40.767166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.982 [2024-12-16 06:04:40.767218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.982 [2024-12-16 06:04:40.767230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.982 [2024-12-16 06:04:40.767237] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.982 [2024-12-16 06:04:40.767243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.982 [2024-12-16 06:04:40.767257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.982 qpair failed and we were unable to recover it. 00:36:06.982 [2024-12-16 06:04:40.777284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.982 [2024-12-16 06:04:40.777355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.982 [2024-12-16 06:04:40.777369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.982 [2024-12-16 06:04:40.777375] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.982 [2024-12-16 06:04:40.777382] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.982 [2024-12-16 06:04:40.777396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.982 qpair failed and we were unable to recover it. 
00:36:06.982 [2024-12-16 06:04:40.787280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.982 [2024-12-16 06:04:40.787331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.982 [2024-12-16 06:04:40.787344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.982 [2024-12-16 06:04:40.787350] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.982 [2024-12-16 06:04:40.787357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.982 [2024-12-16 06:04:40.787371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.982 qpair failed and we were unable to recover it. 00:36:06.982 [2024-12-16 06:04:40.797296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.982 [2024-12-16 06:04:40.797363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.982 [2024-12-16 06:04:40.797377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.982 [2024-12-16 06:04:40.797383] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.982 [2024-12-16 06:04:40.797389] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.982 [2024-12-16 06:04:40.797403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.982 qpair failed and we were unable to recover it. 00:36:06.982 [2024-12-16 06:04:40.807277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.982 [2024-12-16 06:04:40.807336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.982 [2024-12-16 06:04:40.807349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.982 [2024-12-16 06:04:40.807356] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.982 [2024-12-16 06:04:40.807361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.982 [2024-12-16 06:04:40.807376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.982 qpair failed and we were unable to recover it. 
00:36:06.982 [2024-12-16 06:04:40.817397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.982 [2024-12-16 06:04:40.817455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.982 [2024-12-16 06:04:40.817468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.982 [2024-12-16 06:04:40.817474] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.982 [2024-12-16 06:04:40.817480] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.982 [2024-12-16 06:04:40.817494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.982 qpair failed and we were unable to recover it. 00:36:06.982 [2024-12-16 06:04:40.827561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.982 [2024-12-16 06:04:40.827625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.982 [2024-12-16 06:04:40.827641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.982 [2024-12-16 06:04:40.827647] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.982 [2024-12-16 06:04:40.827653] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:06.982 [2024-12-16 06:04:40.827668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.982 qpair failed and we were unable to recover it. 00:36:07.243 [2024-12-16 06:04:40.837490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.243 [2024-12-16 06:04:40.837546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.243 [2024-12-16 06:04:40.837559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.243 [2024-12-16 06:04:40.837566] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.243 [2024-12-16 06:04:40.837572] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.243 [2024-12-16 06:04:40.837586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.243 qpair failed and we were unable to recover it. 
00:36:07.243 [2024-12-16 06:04:40.847555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.243 [2024-12-16 06:04:40.847613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.243 [2024-12-16 06:04:40.847626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.243 [2024-12-16 06:04:40.847632] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.243 [2024-12-16 06:04:40.847638] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.243 [2024-12-16 06:04:40.847652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.243 qpair failed and we were unable to recover it. 00:36:07.243 [2024-12-16 06:04:40.857521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.243 [2024-12-16 06:04:40.857578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.243 [2024-12-16 06:04:40.857592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.243 [2024-12-16 06:04:40.857598] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.243 [2024-12-16 06:04:40.857604] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.243 [2024-12-16 06:04:40.857618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.243 qpair failed and we were unable to recover it. 00:36:07.243 [2024-12-16 06:04:40.867640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.243 [2024-12-16 06:04:40.867704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.243 [2024-12-16 06:04:40.867718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.243 [2024-12-16 06:04:40.867725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.243 [2024-12-16 06:04:40.867731] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.243 [2024-12-16 06:04:40.867749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.243 qpair failed and we were unable to recover it. 
00:36:07.243 [2024-12-16 06:04:40.877559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.243 [2024-12-16 06:04:40.877610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.243 [2024-12-16 06:04:40.877624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.243 [2024-12-16 06:04:40.877630] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.243 [2024-12-16 06:04:40.877636] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.243 [2024-12-16 06:04:40.877651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.244 qpair failed and we were unable to recover it. 00:36:07.244 [2024-12-16 06:04:40.887561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.244 [2024-12-16 06:04:40.887631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.244 [2024-12-16 06:04:40.887644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.244 [2024-12-16 06:04:40.887651] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.244 [2024-12-16 06:04:40.887656] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.244 [2024-12-16 06:04:40.887671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.244 qpair failed and we were unable to recover it. 00:36:07.244 [2024-12-16 06:04:40.897626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.244 [2024-12-16 06:04:40.897684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.244 [2024-12-16 06:04:40.897697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.244 [2024-12-16 06:04:40.897703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.244 [2024-12-16 06:04:40.897709] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.244 [2024-12-16 06:04:40.897723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.244 qpair failed and we were unable to recover it. 
00:36:07.244 [2024-12-16 06:04:40.907565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.244 [2024-12-16 06:04:40.907631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.244 [2024-12-16 06:04:40.907645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.244 [2024-12-16 06:04:40.907651] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.244 [2024-12-16 06:04:40.907656] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.244 [2024-12-16 06:04:40.907671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.244 qpair failed and we were unable to recover it. 00:36:07.244 [2024-12-16 06:04:40.917711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.244 [2024-12-16 06:04:40.917816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.244 [2024-12-16 06:04:40.917832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.244 [2024-12-16 06:04:40.917839] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.244 [2024-12-16 06:04:40.917845] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.244 [2024-12-16 06:04:40.917863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.244 qpair failed and we were unable to recover it. 00:36:07.244 [2024-12-16 06:04:40.927628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.244 [2024-12-16 06:04:40.927680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.244 [2024-12-16 06:04:40.927694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.244 [2024-12-16 06:04:40.927701] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.244 [2024-12-16 06:04:40.927707] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.244 [2024-12-16 06:04:40.927721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.244 qpair failed and we were unable to recover it. 
00:36:07.244 [2024-12-16 06:04:40.937712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.244 [2024-12-16 06:04:40.937769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.244 [2024-12-16 06:04:40.937782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.244 [2024-12-16 06:04:40.937789] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.244 [2024-12-16 06:04:40.937794] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.244 [2024-12-16 06:04:40.937809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.244 qpair failed and we were unable to recover it. 00:36:07.244 [2024-12-16 06:04:40.947754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.244 [2024-12-16 06:04:40.947837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.244 [2024-12-16 06:04:40.947853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.244 [2024-12-16 06:04:40.947860] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.244 [2024-12-16 06:04:40.947865] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.244 [2024-12-16 06:04:40.947880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.244 qpair failed and we were unable to recover it. 00:36:07.244 [2024-12-16 06:04:40.957723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.244 [2024-12-16 06:04:40.957812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.244 [2024-12-16 06:04:40.957826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.244 [2024-12-16 06:04:40.957833] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.244 [2024-12-16 06:04:40.957838] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.244 [2024-12-16 06:04:40.957861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.244 qpair failed and we were unable to recover it. 
00:36:07.244 [2024-12-16 06:04:40.967813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.244 [2024-12-16 06:04:40.967866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.244 [2024-12-16 06:04:40.967880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.244 [2024-12-16 06:04:40.967886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.244 [2024-12-16 06:04:40.967892] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.244 [2024-12-16 06:04:40.967906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.244 qpair failed and we were unable to recover it. 00:36:07.244 [2024-12-16 06:04:40.977893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.245 [2024-12-16 06:04:40.977951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.245 [2024-12-16 06:04:40.977964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.245 [2024-12-16 06:04:40.977971] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.245 [2024-12-16 06:04:40.977976] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.245 [2024-12-16 06:04:40.977990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.245 qpair failed and we were unable to recover it. 00:36:07.245 [2024-12-16 06:04:40.987909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.245 [2024-12-16 06:04:40.987964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.245 [2024-12-16 06:04:40.987977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.245 [2024-12-16 06:04:40.987983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.245 [2024-12-16 06:04:40.987989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.245 [2024-12-16 06:04:40.988003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.245 qpair failed and we were unable to recover it. 
00:36:07.245 [2024-12-16 06:04:40.997906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.245 [2024-12-16 06:04:40.997962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.245 [2024-12-16 06:04:40.997974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.245 [2024-12-16 06:04:40.997981] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.245 [2024-12-16 06:04:40.997987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.245 [2024-12-16 06:04:40.998002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.245 qpair failed and we were unable to recover it. 00:36:07.245 [2024-12-16 06:04:41.007943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.245 [2024-12-16 06:04:41.008000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.245 [2024-12-16 06:04:41.008013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.245 [2024-12-16 06:04:41.008019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.245 [2024-12-16 06:04:41.008025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.245 [2024-12-16 06:04:41.008040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.245 qpair failed and we were unable to recover it. 00:36:07.245 [2024-12-16 06:04:41.017977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.245 [2024-12-16 06:04:41.018032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.245 [2024-12-16 06:04:41.018045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.245 [2024-12-16 06:04:41.018051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.245 [2024-12-16 06:04:41.018058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.245 [2024-12-16 06:04:41.018072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.245 qpair failed and we were unable to recover it. 
00:36:07.245 [2024-12-16 06:04:41.027999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.245 [2024-12-16 06:04:41.028059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.245 [2024-12-16 06:04:41.028071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.245 [2024-12-16 06:04:41.028078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.245 [2024-12-16 06:04:41.028083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.245 [2024-12-16 06:04:41.028097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.245 qpair failed and we were unable to recover it. 00:36:07.245 [2024-12-16 06:04:41.038046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.245 [2024-12-16 06:04:41.038108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.245 [2024-12-16 06:04:41.038120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.245 [2024-12-16 06:04:41.038127] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.245 [2024-12-16 06:04:41.038133] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.245 [2024-12-16 06:04:41.038147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.245 qpair failed and we were unable to recover it. 00:36:07.245 [2024-12-16 06:04:41.048055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.245 [2024-12-16 06:04:41.048112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.245 [2024-12-16 06:04:41.048125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.245 [2024-12-16 06:04:41.048132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.245 [2024-12-16 06:04:41.048141] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.245 [2024-12-16 06:04:41.048155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.245 qpair failed and we were unable to recover it. 
00:36:07.245 [2024-12-16 06:04:41.058090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.245 [2024-12-16 06:04:41.058154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.245 [2024-12-16 06:04:41.058189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.245 [2024-12-16 06:04:41.058201] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.245 [2024-12-16 06:04:41.058208] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.245 [2024-12-16 06:04:41.058231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.245 qpair failed and we were unable to recover it. 00:36:07.245 [2024-12-16 06:04:41.068132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.245 [2024-12-16 06:04:41.068224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.246 [2024-12-16 06:04:41.068238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.246 [2024-12-16 06:04:41.068245] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.246 [2024-12-16 06:04:41.068250] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.246 [2024-12-16 06:04:41.068265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-12-16 06:04:41.078137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.246 [2024-12-16 06:04:41.078192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.246 [2024-12-16 06:04:41.078206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.246 [2024-12-16 06:04:41.078213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.246 [2024-12-16 06:04:41.078219] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.246 [2024-12-16 06:04:41.078233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.246 qpair failed and we were unable to recover it. 
00:36:07.246 [2024-12-16 06:04:41.088170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.246 [2024-12-16 06:04:41.088226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.246 [2024-12-16 06:04:41.088240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.246 [2024-12-16 06:04:41.088246] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.246 [2024-12-16 06:04:41.088252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.246 [2024-12-16 06:04:41.088266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.506 [2024-12-16 06:04:41.098203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.506 [2024-12-16 06:04:41.098260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.506 [2024-12-16 06:04:41.098273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.506 [2024-12-16 06:04:41.098279] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.506 [2024-12-16 06:04:41.098285] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.506 [2024-12-16 06:04:41.098299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.506 qpair failed and we were unable to recover it. 00:36:07.506 [2024-12-16 06:04:41.108240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.506 [2024-12-16 06:04:41.108299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.506 [2024-12-16 06:04:41.108312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.506 [2024-12-16 06:04:41.108318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.506 [2024-12-16 06:04:41.108324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.506 [2024-12-16 06:04:41.108339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.506 qpair failed and we were unable to recover it. 
00:36:07.506 [2024-12-16 06:04:41.118304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.506 [2024-12-16 06:04:41.118366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.506 [2024-12-16 06:04:41.118379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.506 [2024-12-16 06:04:41.118385] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.506 [2024-12-16 06:04:41.118391] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.506 [2024-12-16 06:04:41.118405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.506 qpair failed and we were unable to recover it. 00:36:07.506 [2024-12-16 06:04:41.128278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.506 [2024-12-16 06:04:41.128331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.506 [2024-12-16 06:04:41.128345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.506 [2024-12-16 06:04:41.128352] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.506 [2024-12-16 06:04:41.128358] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.506 [2024-12-16 06:04:41.128373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.506 qpair failed and we were unable to recover it. 00:36:07.506 [2024-12-16 06:04:41.138315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.506 [2024-12-16 06:04:41.138404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.506 [2024-12-16 06:04:41.138418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.506 [2024-12-16 06:04:41.138427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.506 [2024-12-16 06:04:41.138432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.506 [2024-12-16 06:04:41.138447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.506 qpair failed and we were unable to recover it. 
00:36:07.506 [2024-12-16 06:04:41.148353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.506 [2024-12-16 06:04:41.148407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.506 [2024-12-16 06:04:41.148420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.506 [2024-12-16 06:04:41.148426] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.506 [2024-12-16 06:04:41.148433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.506 [2024-12-16 06:04:41.148447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.506 qpair failed and we were unable to recover it. 00:36:07.506 [2024-12-16 06:04:41.158393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.507 [2024-12-16 06:04:41.158456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.507 [2024-12-16 06:04:41.158469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.507 [2024-12-16 06:04:41.158476] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.507 [2024-12-16 06:04:41.158482] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.507 [2024-12-16 06:04:41.158497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.507 qpair failed and we were unable to recover it. 00:36:07.507 [2024-12-16 06:04:41.168403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.507 [2024-12-16 06:04:41.168498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.507 [2024-12-16 06:04:41.168510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.507 [2024-12-16 06:04:41.168517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.507 [2024-12-16 06:04:41.168523] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.507 [2024-12-16 06:04:41.168537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.507 qpair failed and we were unable to recover it. 
00:36:07.507 [2024-12-16 06:04:41.178461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.507 [2024-12-16 06:04:41.178541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.507 [2024-12-16 06:04:41.178553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.507 [2024-12-16 06:04:41.178560] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.507 [2024-12-16 06:04:41.178566] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.507 [2024-12-16 06:04:41.178580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.507 qpair failed and we were unable to recover it. 00:36:07.507 [2024-12-16 06:04:41.188505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.507 [2024-12-16 06:04:41.188567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.507 [2024-12-16 06:04:41.188580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.507 [2024-12-16 06:04:41.188586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.507 [2024-12-16 06:04:41.188592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.507 [2024-12-16 06:04:41.188606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.507 qpair failed and we were unable to recover it. 00:36:07.507 [2024-12-16 06:04:41.198516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.507 [2024-12-16 06:04:41.198584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.507 [2024-12-16 06:04:41.198597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.507 [2024-12-16 06:04:41.198603] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.507 [2024-12-16 06:04:41.198609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.507 [2024-12-16 06:04:41.198623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.507 qpair failed and we were unable to recover it. 
00:36:07.507 [2024-12-16 06:04:41.208521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.507 [2024-12-16 06:04:41.208573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.507 [2024-12-16 06:04:41.208586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.507 [2024-12-16 06:04:41.208593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.507 [2024-12-16 06:04:41.208599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.507 [2024-12-16 06:04:41.208613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.507 qpair failed and we were unable to recover it. 00:36:07.507 [2024-12-16 06:04:41.218481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.507 [2024-12-16 06:04:41.218537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.507 [2024-12-16 06:04:41.218552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.507 [2024-12-16 06:04:41.218559] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.507 [2024-12-16 06:04:41.218565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.507 [2024-12-16 06:04:41.218580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.507 qpair failed and we were unable to recover it. 00:36:07.507 [2024-12-16 06:04:41.228527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.507 [2024-12-16 06:04:41.228613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.507 [2024-12-16 06:04:41.228626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.507 [2024-12-16 06:04:41.228635] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.507 [2024-12-16 06:04:41.228641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.507 [2024-12-16 06:04:41.228655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.507 qpair failed and we were unable to recover it. 
00:36:07.507 [2024-12-16 06:04:41.238624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.507 [2024-12-16 06:04:41.238678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.507 [2024-12-16 06:04:41.238693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.507 [2024-12-16 06:04:41.238699] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.507 [2024-12-16 06:04:41.238706] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.507 [2024-12-16 06:04:41.238720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.507 qpair failed and we were unable to recover it. 00:36:07.507 [2024-12-16 06:04:41.248627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.507 [2024-12-16 06:04:41.248680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.507 [2024-12-16 06:04:41.248694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.507 [2024-12-16 06:04:41.248700] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.507 [2024-12-16 06:04:41.248706] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.507 [2024-12-16 06:04:41.248720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.507 qpair failed and we were unable to recover it. 00:36:07.507 [2024-12-16 06:04:41.258687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.507 [2024-12-16 06:04:41.258746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.507 [2024-12-16 06:04:41.258759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.507 [2024-12-16 06:04:41.258766] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.507 [2024-12-16 06:04:41.258772] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.507 [2024-12-16 06:04:41.258785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.507 qpair failed and we were unable to recover it. 
00:36:07.507 [2024-12-16 06:04:41.268708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.507 [2024-12-16 06:04:41.268772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.507 [2024-12-16 06:04:41.268785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.507 [2024-12-16 06:04:41.268792] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.507 [2024-12-16 06:04:41.268798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.507 [2024-12-16 06:04:41.268812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.507 qpair failed and we were unable to recover it. 00:36:07.507 [2024-12-16 06:04:41.278718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.507 [2024-12-16 06:04:41.278775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.507 [2024-12-16 06:04:41.278789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.507 [2024-12-16 06:04:41.278796] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.507 [2024-12-16 06:04:41.278802] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.507 [2024-12-16 06:04:41.278816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.507 qpair failed and we were unable to recover it. 00:36:07.507 [2024-12-16 06:04:41.288770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.507 [2024-12-16 06:04:41.288824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.507 [2024-12-16 06:04:41.288838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.507 [2024-12-16 06:04:41.288844] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.507 [2024-12-16 06:04:41.288854] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.507 [2024-12-16 06:04:41.288869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.507 qpair failed and we were unable to recover it. 
00:36:07.508 [2024-12-16 06:04:41.298779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.508 [2024-12-16 06:04:41.298833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.508 [2024-12-16 06:04:41.298849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.508 [2024-12-16 06:04:41.298856] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.508 [2024-12-16 06:04:41.298862] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.508 [2024-12-16 06:04:41.298877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.508 qpair failed and we were unable to recover it. 00:36:07.508 [2024-12-16 06:04:41.308813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.508 [2024-12-16 06:04:41.308895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.508 [2024-12-16 06:04:41.308909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.508 [2024-12-16 06:04:41.308915] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.508 [2024-12-16 06:04:41.308921] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.508 [2024-12-16 06:04:41.308936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.508 qpair failed and we were unable to recover it. 00:36:07.508 [2024-12-16 06:04:41.318862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.508 [2024-12-16 06:04:41.318916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.508 [2024-12-16 06:04:41.318932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.508 [2024-12-16 06:04:41.318939] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.508 [2024-12-16 06:04:41.318945] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.508 [2024-12-16 06:04:41.318960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.508 qpair failed and we were unable to recover it. 
00:36:07.508 [2024-12-16 06:04:41.328864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.508 [2024-12-16 06:04:41.328920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.508 [2024-12-16 06:04:41.328934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.508 [2024-12-16 06:04:41.328941] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.508 [2024-12-16 06:04:41.328947] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.508 [2024-12-16 06:04:41.328963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.508 qpair failed and we were unable to recover it. 00:36:07.508 [2024-12-16 06:04:41.338892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.508 [2024-12-16 06:04:41.338948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.508 [2024-12-16 06:04:41.338961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.508 [2024-12-16 06:04:41.338967] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.508 [2024-12-16 06:04:41.338973] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.508 [2024-12-16 06:04:41.338987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.508 qpair failed and we were unable to recover it. 00:36:07.508 [2024-12-16 06:04:41.348939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.508 [2024-12-16 06:04:41.348997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.508 [2024-12-16 06:04:41.349010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.508 [2024-12-16 06:04:41.349016] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.508 [2024-12-16 06:04:41.349022] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.508 [2024-12-16 06:04:41.349036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.508 qpair failed and we were unable to recover it. 
00:36:07.508 [2024-12-16 06:04:41.358971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.508 [2024-12-16 06:04:41.359047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.508 [2024-12-16 06:04:41.359060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.508 [2024-12-16 06:04:41.359066] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.508 [2024-12-16 06:04:41.359072] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.508 [2024-12-16 06:04:41.359089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.508 qpair failed and we were unable to recover it. 00:36:07.769 [2024-12-16 06:04:41.368973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.769 [2024-12-16 06:04:41.369025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.769 [2024-12-16 06:04:41.369037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.769 [2024-12-16 06:04:41.369043] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.769 [2024-12-16 06:04:41.369049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.769 [2024-12-16 06:04:41.369063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.769 qpair failed and we were unable to recover it. 00:36:07.769 [2024-12-16 06:04:41.378993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.769 [2024-12-16 06:04:41.379048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.769 [2024-12-16 06:04:41.379060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.769 [2024-12-16 06:04:41.379067] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.769 [2024-12-16 06:04:41.379073] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.769 [2024-12-16 06:04:41.379086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.769 qpair failed and we were unable to recover it. 
00:36:07.769 [2024-12-16 06:04:41.389043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.769 [2024-12-16 06:04:41.389099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.769 [2024-12-16 06:04:41.389113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.769 [2024-12-16 06:04:41.389119] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.769 [2024-12-16 06:04:41.389125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.769 [2024-12-16 06:04:41.389139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.769 qpair failed and we were unable to recover it. 00:36:07.769 [2024-12-16 06:04:41.399059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.769 [2024-12-16 06:04:41.399112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.769 [2024-12-16 06:04:41.399125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.769 [2024-12-16 06:04:41.399131] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.769 [2024-12-16 06:04:41.399137] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.770 [2024-12-16 06:04:41.399151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.770 qpair failed and we were unable to recover it. 00:36:07.770 [2024-12-16 06:04:41.409083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.770 [2024-12-16 06:04:41.409150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.770 [2024-12-16 06:04:41.409167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.770 [2024-12-16 06:04:41.409173] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.770 [2024-12-16 06:04:41.409179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.770 [2024-12-16 06:04:41.409193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.770 qpair failed and we were unable to recover it. 
00:36:07.770 [2024-12-16 06:04:41.419125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.770 [2024-12-16 06:04:41.419178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.770 [2024-12-16 06:04:41.419191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.770 [2024-12-16 06:04:41.419197] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.770 [2024-12-16 06:04:41.419203] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.770 [2024-12-16 06:04:41.419217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.770 qpair failed and we were unable to recover it. 00:36:07.770 [2024-12-16 06:04:41.429211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.770 [2024-12-16 06:04:41.429303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.770 [2024-12-16 06:04:41.429315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.770 [2024-12-16 06:04:41.429321] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.770 [2024-12-16 06:04:41.429327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.770 [2024-12-16 06:04:41.429341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.770 qpair failed and we were unable to recover it. 00:36:07.770 [2024-12-16 06:04:41.439177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.770 [2024-12-16 06:04:41.439228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.770 [2024-12-16 06:04:41.439241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.770 [2024-12-16 06:04:41.439247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.770 [2024-12-16 06:04:41.439253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.770 [2024-12-16 06:04:41.439267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.770 qpair failed and we were unable to recover it. 
00:36:07.770 [2024-12-16 06:04:41.449202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.770 [2024-12-16 06:04:41.449256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.770 [2024-12-16 06:04:41.449268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.770 [2024-12-16 06:04:41.449275] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.770 [2024-12-16 06:04:41.449281] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.770 [2024-12-16 06:04:41.449298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.770 qpair failed and we were unable to recover it. 00:36:07.770 [2024-12-16 06:04:41.459244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.770 [2024-12-16 06:04:41.459299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.770 [2024-12-16 06:04:41.459312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.770 [2024-12-16 06:04:41.459318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.770 [2024-12-16 06:04:41.459324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.770 [2024-12-16 06:04:41.459338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.770 qpair failed and we were unable to recover it. 00:36:07.770 [2024-12-16 06:04:41.469272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.770 [2024-12-16 06:04:41.469328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.770 [2024-12-16 06:04:41.469341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.770 [2024-12-16 06:04:41.469347] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.770 [2024-12-16 06:04:41.469353] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.770 [2024-12-16 06:04:41.469367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.770 qpair failed and we were unable to recover it. 
00:36:07.770 [2024-12-16 06:04:41.479296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.770 [2024-12-16 06:04:41.479347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.770 [2024-12-16 06:04:41.479360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.770 [2024-12-16 06:04:41.479366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.770 [2024-12-16 06:04:41.479372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.770 [2024-12-16 06:04:41.479386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.770 qpair failed and we were unable to recover it. 00:36:07.770 [2024-12-16 06:04:41.489332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.770 [2024-12-16 06:04:41.489388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.770 [2024-12-16 06:04:41.489401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.770 [2024-12-16 06:04:41.489407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.770 [2024-12-16 06:04:41.489413] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.770 [2024-12-16 06:04:41.489427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.770 qpair failed and we were unable to recover it. 00:36:07.770 [2024-12-16 06:04:41.499327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.770 [2024-12-16 06:04:41.499381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.770 [2024-12-16 06:04:41.499397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.770 [2024-12-16 06:04:41.499403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.770 [2024-12-16 06:04:41.499409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.770 [2024-12-16 06:04:41.499422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.770 qpair failed and we were unable to recover it. 
00:36:07.770 [2024-12-16 06:04:41.509378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.770 [2024-12-16 06:04:41.509429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.770 [2024-12-16 06:04:41.509441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.770 [2024-12-16 06:04:41.509447] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.770 [2024-12-16 06:04:41.509453] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.770 [2024-12-16 06:04:41.509467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.770 qpair failed and we were unable to recover it. 00:36:07.770 [2024-12-16 06:04:41.519408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.770 [2024-12-16 06:04:41.519471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.770 [2024-12-16 06:04:41.519484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.770 [2024-12-16 06:04:41.519490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.770 [2024-12-16 06:04:41.519495] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.770 [2024-12-16 06:04:41.519509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.770 qpair failed and we were unable to recover it. 00:36:07.770 [2024-12-16 06:04:41.529486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.770 [2024-12-16 06:04:41.529588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.770 [2024-12-16 06:04:41.529603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.770 [2024-12-16 06:04:41.529610] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.770 [2024-12-16 06:04:41.529616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.770 [2024-12-16 06:04:41.529630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.770 qpair failed and we were unable to recover it. 
00:36:07.770 [2024-12-16 06:04:41.539512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.770 [2024-12-16 06:04:41.539612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.771 [2024-12-16 06:04:41.539625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.771 [2024-12-16 06:04:41.539631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.771 [2024-12-16 06:04:41.539643] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.771 [2024-12-16 06:04:41.539658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.771 qpair failed and we were unable to recover it. 00:36:07.771 [2024-12-16 06:04:41.549497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.771 [2024-12-16 06:04:41.549557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.771 [2024-12-16 06:04:41.549570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.771 [2024-12-16 06:04:41.549576] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.771 [2024-12-16 06:04:41.549582] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.771 [2024-12-16 06:04:41.549597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.771 qpair failed and we were unable to recover it. 00:36:07.771 [2024-12-16 06:04:41.559565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.771 [2024-12-16 06:04:41.559631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.771 [2024-12-16 06:04:41.559645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.771 [2024-12-16 06:04:41.559651] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.771 [2024-12-16 06:04:41.559657] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.771 [2024-12-16 06:04:41.559671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.771 qpair failed and we were unable to recover it. 
00:36:07.771 [2024-12-16 06:04:41.569558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.771 [2024-12-16 06:04:41.569609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.771 [2024-12-16 06:04:41.569622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.771 [2024-12-16 06:04:41.569628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.771 [2024-12-16 06:04:41.569634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.771 [2024-12-16 06:04:41.569648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.771 qpair failed and we were unable to recover it. 00:36:07.771 [2024-12-16 06:04:41.579519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.771 [2024-12-16 06:04:41.579575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.771 [2024-12-16 06:04:41.579589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.771 [2024-12-16 06:04:41.579595] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.771 [2024-12-16 06:04:41.579601] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.771 [2024-12-16 06:04:41.579615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.771 qpair failed and we were unable to recover it. 00:36:07.771 [2024-12-16 06:04:41.589606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.771 [2024-12-16 06:04:41.589665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.771 [2024-12-16 06:04:41.589678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.771 [2024-12-16 06:04:41.589685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.771 [2024-12-16 06:04:41.589691] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.771 [2024-12-16 06:04:41.589705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.771 qpair failed and we were unable to recover it. 
00:36:07.771 [2024-12-16 06:04:41.599657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.771 [2024-12-16 06:04:41.599714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.771 [2024-12-16 06:04:41.599727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.771 [2024-12-16 06:04:41.599733] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.771 [2024-12-16 06:04:41.599738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.771 [2024-12-16 06:04:41.599753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.771 qpair failed and we were unable to recover it. 00:36:07.771 [2024-12-16 06:04:41.609653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.771 [2024-12-16 06:04:41.609703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.771 [2024-12-16 06:04:41.609716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.771 [2024-12-16 06:04:41.609723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.771 [2024-12-16 06:04:41.609729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.771 [2024-12-16 06:04:41.609743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.771 qpair failed and we were unable to recover it. 00:36:07.771 [2024-12-16 06:04:41.619687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.771 [2024-12-16 06:04:41.619745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.771 [2024-12-16 06:04:41.619758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.771 [2024-12-16 06:04:41.619765] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.771 [2024-12-16 06:04:41.619771] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:07.771 [2024-12-16 06:04:41.619785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.771 qpair failed and we were unable to recover it. 
00:36:08.031 [2024-12-16 06:04:41.629719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.031 [2024-12-16 06:04:41.629782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.031 [2024-12-16 06:04:41.629795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.031 [2024-12-16 06:04:41.629801] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.031 [2024-12-16 06:04:41.629810] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.031 [2024-12-16 06:04:41.629826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.031 qpair failed and we were unable to recover it. 00:36:08.031 [2024-12-16 06:04:41.639689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.031 [2024-12-16 06:04:41.639742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.031 [2024-12-16 06:04:41.639755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.031 [2024-12-16 06:04:41.639761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.032 [2024-12-16 06:04:41.639767] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.032 [2024-12-16 06:04:41.639781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.032 qpair failed and we were unable to recover it. 00:36:08.032 [2024-12-16 06:04:41.649760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.032 [2024-12-16 06:04:41.649813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.032 [2024-12-16 06:04:41.649825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.032 [2024-12-16 06:04:41.649832] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.032 [2024-12-16 06:04:41.649837] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.032 [2024-12-16 06:04:41.649860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.032 qpair failed and we were unable to recover it. 
00:36:08.032 [2024-12-16 06:04:41.659797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.032 [2024-12-16 06:04:41.659898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.032 [2024-12-16 06:04:41.659911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.032 [2024-12-16 06:04:41.659917] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.032 [2024-12-16 06:04:41.659923] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.032 [2024-12-16 06:04:41.659939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.032 qpair failed and we were unable to recover it. 00:36:08.032 [2024-12-16 06:04:41.669833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.032 [2024-12-16 06:04:41.669895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.032 [2024-12-16 06:04:41.669909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.032 [2024-12-16 06:04:41.669916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.032 [2024-12-16 06:04:41.669922] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.032 [2024-12-16 06:04:41.669936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.032 qpair failed and we were unable to recover it. 00:36:08.032 [2024-12-16 06:04:41.679851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.032 [2024-12-16 06:04:41.679938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.032 [2024-12-16 06:04:41.679950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.032 [2024-12-16 06:04:41.679956] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.032 [2024-12-16 06:04:41.679962] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.032 [2024-12-16 06:04:41.679976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.032 qpair failed and we were unable to recover it. 
00:36:08.032 [2024-12-16 06:04:41.689889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.032 [2024-12-16 06:04:41.689968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.032 [2024-12-16 06:04:41.689980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.032 [2024-12-16 06:04:41.689987] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.032 [2024-12-16 06:04:41.689993] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.032 [2024-12-16 06:04:41.690007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.032 qpair failed and we were unable to recover it. 00:36:08.032 [2024-12-16 06:04:41.699950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.032 [2024-12-16 06:04:41.700009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.032 [2024-12-16 06:04:41.700022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.032 [2024-12-16 06:04:41.700029] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.032 [2024-12-16 06:04:41.700035] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.032 [2024-12-16 06:04:41.700049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.032 qpair failed and we were unable to recover it. 00:36:08.032 [2024-12-16 06:04:41.709931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.032 [2024-12-16 06:04:41.709986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.032 [2024-12-16 06:04:41.709999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.032 [2024-12-16 06:04:41.710005] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.032 [2024-12-16 06:04:41.710012] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.032 [2024-12-16 06:04:41.710026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.032 qpair failed and we were unable to recover it. 
00:36:08.032 [2024-12-16 06:04:41.719953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.032 [2024-12-16 06:04:41.720010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.032 [2024-12-16 06:04:41.720025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.032 [2024-12-16 06:04:41.720035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.032 [2024-12-16 06:04:41.720041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.032 [2024-12-16 06:04:41.720056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.032 qpair failed and we were unable to recover it. 00:36:08.032 [2024-12-16 06:04:41.729973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.032 [2024-12-16 06:04:41.730029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.032 [2024-12-16 06:04:41.730043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.032 [2024-12-16 06:04:41.730051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.032 [2024-12-16 06:04:41.730057] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.032 [2024-12-16 06:04:41.730072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.032 qpair failed and we were unable to recover it. 00:36:08.032 [2024-12-16 06:04:41.740024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.032 [2024-12-16 06:04:41.740087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.032 [2024-12-16 06:04:41.740100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.032 [2024-12-16 06:04:41.740107] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.032 [2024-12-16 06:04:41.740113] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.032 [2024-12-16 06:04:41.740127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.032 qpair failed and we were unable to recover it. 
00:36:08.032 [2024-12-16 06:04:41.750053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.032 [2024-12-16 06:04:41.750108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.032 [2024-12-16 06:04:41.750122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.032 [2024-12-16 06:04:41.750128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.032 [2024-12-16 06:04:41.750134] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.032 [2024-12-16 06:04:41.750148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.032 qpair failed and we were unable to recover it. 00:36:08.032 [2024-12-16 06:04:41.760114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.032 [2024-12-16 06:04:41.760164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.032 [2024-12-16 06:04:41.760177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.032 [2024-12-16 06:04:41.760184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.032 [2024-12-16 06:04:41.760190] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.032 [2024-12-16 06:04:41.760204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.032 qpair failed and we were unable to recover it. 00:36:08.032 [2024-12-16 06:04:41.770107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.032 [2024-12-16 06:04:41.770160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.033 [2024-12-16 06:04:41.770173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.033 [2024-12-16 06:04:41.770180] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.033 [2024-12-16 06:04:41.770185] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.033 [2024-12-16 06:04:41.770200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.033 qpair failed and we were unable to recover it. 
00:36:08.033 [2024-12-16 06:04:41.780102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.033 [2024-12-16 06:04:41.780193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.033 [2024-12-16 06:04:41.780205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.033 [2024-12-16 06:04:41.780211] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.033 [2024-12-16 06:04:41.780217] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.033 [2024-12-16 06:04:41.780231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.033 qpair failed and we were unable to recover it. 00:36:08.033 [2024-12-16 06:04:41.790095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.033 [2024-12-16 06:04:41.790154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.033 [2024-12-16 06:04:41.790167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.033 [2024-12-16 06:04:41.790173] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.033 [2024-12-16 06:04:41.790179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.033 [2024-12-16 06:04:41.790193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.033 qpair failed and we were unable to recover it. 00:36:08.033 [2024-12-16 06:04:41.800163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.033 [2024-12-16 06:04:41.800215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.033 [2024-12-16 06:04:41.800228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.033 [2024-12-16 06:04:41.800234] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.033 [2024-12-16 06:04:41.800240] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.033 [2024-12-16 06:04:41.800254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.033 qpair failed and we were unable to recover it. 
00:36:08.033 [2024-12-16 06:04:41.810147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.033 [2024-12-16 06:04:41.810199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.033 [2024-12-16 06:04:41.810216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.033 [2024-12-16 06:04:41.810222] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.033 [2024-12-16 06:04:41.810228] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.033 [2024-12-16 06:04:41.810242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.033 qpair failed and we were unable to recover it. 00:36:08.033 [2024-12-16 06:04:41.820237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.033 [2024-12-16 06:04:41.820293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.033 [2024-12-16 06:04:41.820307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.033 [2024-12-16 06:04:41.820313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.033 [2024-12-16 06:04:41.820319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.033 [2024-12-16 06:04:41.820334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.033 qpair failed and we were unable to recover it. 00:36:08.033 [2024-12-16 06:04:41.830252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.033 [2024-12-16 06:04:41.830307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.033 [2024-12-16 06:04:41.830320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.033 [2024-12-16 06:04:41.830326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.033 [2024-12-16 06:04:41.830332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.033 [2024-12-16 06:04:41.830346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.033 qpair failed and we were unable to recover it. 
00:36:08.033 [2024-12-16 06:04:41.840260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.033 [2024-12-16 06:04:41.840353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.033 [2024-12-16 06:04:41.840366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.033 [2024-12-16 06:04:41.840372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.033 [2024-12-16 06:04:41.840378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.033 [2024-12-16 06:04:41.840392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.033 qpair failed and we were unable to recover it. 00:36:08.033 [2024-12-16 06:04:41.850260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.033 [2024-12-16 06:04:41.850317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.033 [2024-12-16 06:04:41.850331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.033 [2024-12-16 06:04:41.850337] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.033 [2024-12-16 06:04:41.850343] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.033 [2024-12-16 06:04:41.850358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.033 qpair failed and we were unable to recover it. 00:36:08.033 [2024-12-16 06:04:41.860376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.033 [2024-12-16 06:04:41.860432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.033 [2024-12-16 06:04:41.860445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.033 [2024-12-16 06:04:41.860451] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.033 [2024-12-16 06:04:41.860457] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.033 [2024-12-16 06:04:41.860471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.033 qpair failed and we were unable to recover it. 
00:36:08.033 [2024-12-16 06:04:41.870387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.033 [2024-12-16 06:04:41.870442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.033 [2024-12-16 06:04:41.870455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.033 [2024-12-16 06:04:41.870461] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.033 [2024-12-16 06:04:41.870467] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.033 [2024-12-16 06:04:41.870481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.033 qpair failed and we were unable to recover it. 00:36:08.033 [2024-12-16 06:04:41.880426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.033 [2024-12-16 06:04:41.880482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.033 [2024-12-16 06:04:41.880495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.033 [2024-12-16 06:04:41.880501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.033 [2024-12-16 06:04:41.880508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.033 [2024-12-16 06:04:41.880522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.033 qpair failed and we were unable to recover it. 00:36:08.293 [2024-12-16 06:04:41.890365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.293 [2024-12-16 06:04:41.890440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.293 [2024-12-16 06:04:41.890453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.293 [2024-12-16 06:04:41.890459] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.293 [2024-12-16 06:04:41.890465] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.293 [2024-12-16 06:04:41.890479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.293 qpair failed and we were unable to recover it. 
00:36:08.293 [2024-12-16 06:04:41.900473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.293 [2024-12-16 06:04:41.900530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.293 [2024-12-16 06:04:41.900546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.293 [2024-12-16 06:04:41.900553] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.293 [2024-12-16 06:04:41.900558] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.293 [2024-12-16 06:04:41.900573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.293 qpair failed and we were unable to recover it. 00:36:08.293 [2024-12-16 06:04:41.910501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.293 [2024-12-16 06:04:41.910560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.293 [2024-12-16 06:04:41.910573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.293 [2024-12-16 06:04:41.910579] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.293 [2024-12-16 06:04:41.910585] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.293 [2024-12-16 06:04:41.910599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.293 qpair failed and we were unable to recover it. 00:36:08.293 [2024-12-16 06:04:41.920451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.293 [2024-12-16 06:04:41.920507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.293 [2024-12-16 06:04:41.920519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.293 [2024-12-16 06:04:41.920526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.293 [2024-12-16 06:04:41.920532] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.293 [2024-12-16 06:04:41.920545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.293 qpair failed and we were unable to recover it. 
00:36:08.293 [2024-12-16 06:04:41.930553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.294 [2024-12-16 06:04:41.930636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.294 [2024-12-16 06:04:41.930650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.294 [2024-12-16 06:04:41.930656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.294 [2024-12-16 06:04:41.930662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.294 [2024-12-16 06:04:41.930676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.294 qpair failed and we were unable to recover it. 00:36:08.294 [2024-12-16 06:04:41.940549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.294 [2024-12-16 06:04:41.940650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.294 [2024-12-16 06:04:41.940663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.294 [2024-12-16 06:04:41.940669] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.294 [2024-12-16 06:04:41.940675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.294 [2024-12-16 06:04:41.940693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.294 qpair failed and we were unable to recover it. 00:36:08.294 [2024-12-16 06:04:41.950600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.294 [2024-12-16 06:04:41.950655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.294 [2024-12-16 06:04:41.950668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.294 [2024-12-16 06:04:41.950674] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.294 [2024-12-16 06:04:41.950680] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.294 [2024-12-16 06:04:41.950695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.294 qpair failed and we were unable to recover it. 
00:36:08.294 [2024-12-16 06:04:41.960650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.294 [2024-12-16 06:04:41.960728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.294 [2024-12-16 06:04:41.960741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.294 [2024-12-16 06:04:41.960747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.294 [2024-12-16 06:04:41.960753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.294 [2024-12-16 06:04:41.960768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.294 qpair failed and we were unable to recover it. 00:36:08.294 [2024-12-16 06:04:41.970658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.294 [2024-12-16 06:04:41.970758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.294 [2024-12-16 06:04:41.970771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.294 [2024-12-16 06:04:41.970777] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.294 [2024-12-16 06:04:41.970783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.294 [2024-12-16 06:04:41.970798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.294 qpair failed and we were unable to recover it. 00:36:08.294 [2024-12-16 06:04:41.980679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.294 [2024-12-16 06:04:41.980733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.294 [2024-12-16 06:04:41.980746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.294 [2024-12-16 06:04:41.980753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.294 [2024-12-16 06:04:41.980758] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.294 [2024-12-16 06:04:41.980773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.294 qpair failed and we were unable to recover it. 
00:36:08.294 [2024-12-16 06:04:41.990698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.294 [2024-12-16 06:04:41.990756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.294 [2024-12-16 06:04:41.990772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.294 [2024-12-16 06:04:41.990778] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.294 [2024-12-16 06:04:41.990784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.294 [2024-12-16 06:04:41.990798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.294 qpair failed and we were unable to recover it. 00:36:08.294 [2024-12-16 06:04:42.000741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.294 [2024-12-16 06:04:42.000797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.294 [2024-12-16 06:04:42.000810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.294 [2024-12-16 06:04:42.000817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.294 [2024-12-16 06:04:42.000823] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.294 [2024-12-16 06:04:42.000837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.294 qpair failed and we were unable to recover it. 00:36:08.294 [2024-12-16 06:04:42.010761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.294 [2024-12-16 06:04:42.010821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.294 [2024-12-16 06:04:42.010834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.294 [2024-12-16 06:04:42.010840] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.294 [2024-12-16 06:04:42.010851] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.294 [2024-12-16 06:04:42.010866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.294 qpair failed and we were unable to recover it. 
00:36:08.294 [2024-12-16 06:04:42.020812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.294 [2024-12-16 06:04:42.020899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.294 [2024-12-16 06:04:42.020912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.294 [2024-12-16 06:04:42.020919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.294 [2024-12-16 06:04:42.020925] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.294 [2024-12-16 06:04:42.020940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.294 qpair failed and we were unable to recover it. 00:36:08.294 [2024-12-16 06:04:42.030826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.294 [2024-12-16 06:04:42.030902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.294 [2024-12-16 06:04:42.030916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.294 [2024-12-16 06:04:42.030922] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.294 [2024-12-16 06:04:42.030931] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.294 [2024-12-16 06:04:42.030945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.294 qpair failed and we were unable to recover it. 00:36:08.294 [2024-12-16 06:04:42.040884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.294 [2024-12-16 06:04:42.040943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.294 [2024-12-16 06:04:42.040956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.294 [2024-12-16 06:04:42.040963] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.294 [2024-12-16 06:04:42.040968] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.294 [2024-12-16 06:04:42.040982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.294 qpair failed and we were unable to recover it. 
00:36:08.294 [2024-12-16 06:04:42.050894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.294 [2024-12-16 06:04:42.050948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.294 [2024-12-16 06:04:42.050962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.294 [2024-12-16 06:04:42.050968] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.294 [2024-12-16 06:04:42.050974] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.294 [2024-12-16 06:04:42.050989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.294 qpair failed and we were unable to recover it. 00:36:08.295 [2024-12-16 06:04:42.060939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.295 [2024-12-16 06:04:42.060996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.295 [2024-12-16 06:04:42.061009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.295 [2024-12-16 06:04:42.061015] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.295 [2024-12-16 06:04:42.061021] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.295 [2024-12-16 06:04:42.061035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.295 qpair failed and we were unable to recover it. 00:36:08.295 [2024-12-16 06:04:42.070942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.295 [2024-12-16 06:04:42.071001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.295 [2024-12-16 06:04:42.071014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.295 [2024-12-16 06:04:42.071020] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.295 [2024-12-16 06:04:42.071026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.295 [2024-12-16 06:04:42.071041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.295 qpair failed and we were unable to recover it. 
00:36:08.295 [2024-12-16 06:04:42.080983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.295 [2024-12-16 06:04:42.081045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.295 [2024-12-16 06:04:42.081059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.295 [2024-12-16 06:04:42.081065] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.295 [2024-12-16 06:04:42.081071] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.295 [2024-12-16 06:04:42.081085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.295 qpair failed and we were unable to recover it. 00:36:08.295 [2024-12-16 06:04:42.090972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.295 [2024-12-16 06:04:42.091029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.295 [2024-12-16 06:04:42.091042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.295 [2024-12-16 06:04:42.091048] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.295 [2024-12-16 06:04:42.091054] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.295 [2024-12-16 06:04:42.091069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.295 qpair failed and we were unable to recover it. 00:36:08.295 [2024-12-16 06:04:42.101011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.295 [2024-12-16 06:04:42.101068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.295 [2024-12-16 06:04:42.101081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.295 [2024-12-16 06:04:42.101087] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.295 [2024-12-16 06:04:42.101093] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.295 [2024-12-16 06:04:42.101107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.295 qpair failed and we were unable to recover it. 
00:36:08.295 [2024-12-16 06:04:42.111061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.295 [2024-12-16 06:04:42.111144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.295 [2024-12-16 06:04:42.111157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.295 [2024-12-16 06:04:42.111163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.295 [2024-12-16 06:04:42.111169] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.295 [2024-12-16 06:04:42.111182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.295 qpair failed and we were unable to recover it. 00:36:08.295 [2024-12-16 06:04:42.121122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.295 [2024-12-16 06:04:42.121194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.295 [2024-12-16 06:04:42.121207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.295 [2024-12-16 06:04:42.121214] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.295 [2024-12-16 06:04:42.121223] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.295 [2024-12-16 06:04:42.121238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.295 qpair failed and we were unable to recover it. 00:36:08.295 [2024-12-16 06:04:42.131092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.295 [2024-12-16 06:04:42.131142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.295 [2024-12-16 06:04:42.131156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.295 [2024-12-16 06:04:42.131163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.295 [2024-12-16 06:04:42.131169] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.295 [2024-12-16 06:04:42.131183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.295 qpair failed and we were unable to recover it. 
00:36:08.295 [2024-12-16 06:04:42.141091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.295 [2024-12-16 06:04:42.141176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.295 [2024-12-16 06:04:42.141189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.295 [2024-12-16 06:04:42.141196] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.295 [2024-12-16 06:04:42.141202] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.295 [2024-12-16 06:04:42.141216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.295 qpair failed and we were unable to recover it. 00:36:08.555 [2024-12-16 06:04:42.151165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.555 [2024-12-16 06:04:42.151218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.555 [2024-12-16 06:04:42.151231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.555 [2024-12-16 06:04:42.151238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.555 [2024-12-16 06:04:42.151244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.555 [2024-12-16 06:04:42.151257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-12-16 06:04:42.161208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.555 [2024-12-16 06:04:42.161271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.555 [2024-12-16 06:04:42.161283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.555 [2024-12-16 06:04:42.161289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.555 [2024-12-16 06:04:42.161295] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.555 [2024-12-16 06:04:42.161309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.555 qpair failed and we were unable to recover it. 
00:36:08.555 [2024-12-16 06:04:42.171161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.555 [2024-12-16 06:04:42.171243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.555 [2024-12-16 06:04:42.171256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.555 [2024-12-16 06:04:42.171262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.555 [2024-12-16 06:04:42.171268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.555 [2024-12-16 06:04:42.171282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-12-16 06:04:42.181192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.555 [2024-12-16 06:04:42.181269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.555 [2024-12-16 06:04:42.181281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.555 [2024-12-16 06:04:42.181288] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.555 [2024-12-16 06:04:42.181293] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.555 [2024-12-16 06:04:42.181307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.555 qpair failed and we were unable to recover it. 00:36:08.555 [2024-12-16 06:04:42.191272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.555 [2024-12-16 06:04:42.191327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.555 [2024-12-16 06:04:42.191340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.555 [2024-12-16 06:04:42.191346] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.556 [2024-12-16 06:04:42.191352] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.556 [2024-12-16 06:04:42.191365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.556 qpair failed and we were unable to recover it. 
00:36:08.556 [2024-12-16 06:04:42.201280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.556 [2024-12-16 06:04:42.201373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.556 [2024-12-16 06:04:42.201386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.556 [2024-12-16 06:04:42.201392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.556 [2024-12-16 06:04:42.201398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.556 [2024-12-16 06:04:42.201412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.556 qpair failed and we were unable to recover it. 00:36:08.556 [2024-12-16 06:04:42.211277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.556 [2024-12-16 06:04:42.211357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.556 [2024-12-16 06:04:42.211370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.556 [2024-12-16 06:04:42.211379] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.556 [2024-12-16 06:04:42.211385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.556 [2024-12-16 06:04:42.211399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.556 qpair failed and we were unable to recover it. 00:36:08.556 [2024-12-16 06:04:42.221360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.556 [2024-12-16 06:04:42.221418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.556 [2024-12-16 06:04:42.221431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.556 [2024-12-16 06:04:42.221437] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.556 [2024-12-16 06:04:42.221443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.556 [2024-12-16 06:04:42.221457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.556 qpair failed and we were unable to recover it. 
00:36:08.556 [2024-12-16 06:04:42.231374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.556 [2024-12-16 06:04:42.231429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.556 [2024-12-16 06:04:42.231443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.556 [2024-12-16 06:04:42.231449] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.556 [2024-12-16 06:04:42.231455] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.556 [2024-12-16 06:04:42.231469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.556 qpair failed and we were unable to recover it. 00:36:08.556 [2024-12-16 06:04:42.241380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.556 [2024-12-16 06:04:42.241435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.556 [2024-12-16 06:04:42.241447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.556 [2024-12-16 06:04:42.241454] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.556 [2024-12-16 06:04:42.241459] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.556 [2024-12-16 06:04:42.241473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.556 qpair failed and we were unable to recover it. 00:36:08.556 [2024-12-16 06:04:42.251401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.556 [2024-12-16 06:04:42.251476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.556 [2024-12-16 06:04:42.251489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.556 [2024-12-16 06:04:42.251495] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.556 [2024-12-16 06:04:42.251501] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.556 [2024-12-16 06:04:42.251516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.556 qpair failed and we were unable to recover it. 
00:36:08.556 [2024-12-16 06:04:42.261526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.556 [2024-12-16 06:04:42.261608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.556 [2024-12-16 06:04:42.261621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.556 [2024-12-16 06:04:42.261628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.556 [2024-12-16 06:04:42.261634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.556 [2024-12-16 06:04:42.261647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.556 qpair failed and we were unable to recover it. 00:36:08.556 [2024-12-16 06:04:42.271479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.556 [2024-12-16 06:04:42.271535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.556 [2024-12-16 06:04:42.271548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.556 [2024-12-16 06:04:42.271554] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.556 [2024-12-16 06:04:42.271560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.556 [2024-12-16 06:04:42.271574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.556 qpair failed and we were unable to recover it. 00:36:08.556 [2024-12-16 06:04:42.281515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.556 [2024-12-16 06:04:42.281570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-12-16 06:04:42.281584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-12-16 06:04:42.281590] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-12-16 06:04:42.281596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.557 [2024-12-16 06:04:42.281610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.557 qpair failed and we were unable to recover it. 
00:36:08.557 [2024-12-16 06:04:42.291523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-12-16 06:04:42.291622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-12-16 06:04:42.291635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-12-16 06:04:42.291641] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-12-16 06:04:42.291646] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.557 [2024-12-16 06:04:42.291661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.557 qpair failed and we were unable to recover it. 00:36:08.557 [2024-12-16 06:04:42.301592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-12-16 06:04:42.301653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-12-16 06:04:42.301666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-12-16 06:04:42.301677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-12-16 06:04:42.301684] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.557 [2024-12-16 06:04:42.301697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.557 qpair failed and we were unable to recover it. 00:36:08.557 [2024-12-16 06:04:42.311599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-12-16 06:04:42.311659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-12-16 06:04:42.311673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-12-16 06:04:42.311680] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-12-16 06:04:42.311686] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.557 [2024-12-16 06:04:42.311701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.557 qpair failed and we were unable to recover it. 
00:36:08.557 [2024-12-16 06:04:42.321633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-12-16 06:04:42.321689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-12-16 06:04:42.321703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-12-16 06:04:42.321710] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-12-16 06:04:42.321716] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.557 [2024-12-16 06:04:42.321730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.557 qpair failed and we were unable to recover it. 00:36:08.557 [2024-12-16 06:04:42.331649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-12-16 06:04:42.331703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-12-16 06:04:42.331717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-12-16 06:04:42.331723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-12-16 06:04:42.331729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.557 [2024-12-16 06:04:42.331744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.557 qpair failed and we were unable to recover it. 00:36:08.557 [2024-12-16 06:04:42.341724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-12-16 06:04:42.341784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-12-16 06:04:42.341798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-12-16 06:04:42.341804] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-12-16 06:04:42.341811] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.557 [2024-12-16 06:04:42.341825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.557 qpair failed and we were unable to recover it. 
00:36:08.557 [2024-12-16 06:04:42.351786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-12-16 06:04:42.351851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-12-16 06:04:42.351865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-12-16 06:04:42.351871] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-12-16 06:04:42.351877] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.557 [2024-12-16 06:04:42.351892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.557 qpair failed and we were unable to recover it. 00:36:08.557 [2024-12-16 06:04:42.361755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-12-16 06:04:42.361807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-12-16 06:04:42.361821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-12-16 06:04:42.361827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-12-16 06:04:42.361833] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.557 [2024-12-16 06:04:42.361850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.557 qpair failed and we were unable to recover it. 00:36:08.557 [2024-12-16 06:04:42.371823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-12-16 06:04:42.371890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-12-16 06:04:42.371903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-12-16 06:04:42.371909] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-12-16 06:04:42.371915] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.557 [2024-12-16 06:04:42.371929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.557 qpair failed and we were unable to recover it. 
00:36:08.557 [2024-12-16 06:04:42.381803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-12-16 06:04:42.381859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-12-16 06:04:42.381872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-12-16 06:04:42.381879] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-12-16 06:04:42.381884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.557 [2024-12-16 06:04:42.381899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.557 qpair failed and we were unable to recover it. 00:36:08.557 [2024-12-16 06:04:42.391786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-12-16 06:04:42.391841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-12-16 06:04:42.391861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-12-16 06:04:42.391867] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-12-16 06:04:42.391873] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.557 [2024-12-16 06:04:42.391888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.557 qpair failed and we were unable to recover it. 00:36:08.557 [2024-12-16 06:04:42.401867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.557 [2024-12-16 06:04:42.401921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.557 [2024-12-16 06:04:42.401933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.557 [2024-12-16 06:04:42.401940] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.557 [2024-12-16 06:04:42.401946] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.557 [2024-12-16 06:04:42.401960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.557 qpair failed and we were unable to recover it. 
00:36:08.817 [2024-12-16 06:04:42.411860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.817 [2024-12-16 06:04:42.411944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.817 [2024-12-16 06:04:42.411957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.817 [2024-12-16 06:04:42.411963] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.817 [2024-12-16 06:04:42.411969] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.817 [2024-12-16 06:04:42.411983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.818 qpair failed and we were unable to recover it. 00:36:08.818 [2024-12-16 06:04:42.421939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-12-16 06:04:42.421999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-12-16 06:04:42.422011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-12-16 06:04:42.422018] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-12-16 06:04:42.422024] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.818 [2024-12-16 06:04:42.422038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.818 qpair failed and we were unable to recover it. 00:36:08.818 [2024-12-16 06:04:42.431958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-12-16 06:04:42.432015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-12-16 06:04:42.432028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-12-16 06:04:42.432034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-12-16 06:04:42.432040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.818 [2024-12-16 06:04:42.432059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.818 qpair failed and we were unable to recover it. 
00:36:08.818 [2024-12-16 06:04:42.441981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-12-16 06:04:42.442039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-12-16 06:04:42.442052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-12-16 06:04:42.442058] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-12-16 06:04:42.442064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.818 [2024-12-16 06:04:42.442078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.818 qpair failed and we were unable to recover it. 00:36:08.818 [2024-12-16 06:04:42.452023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-12-16 06:04:42.452090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-12-16 06:04:42.452102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-12-16 06:04:42.452108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-12-16 06:04:42.452114] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.818 [2024-12-16 06:04:42.452128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.818 qpair failed and we were unable to recover it. 00:36:08.818 [2024-12-16 06:04:42.462021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-12-16 06:04:42.462077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-12-16 06:04:42.462089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-12-16 06:04:42.462096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-12-16 06:04:42.462101] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.818 [2024-12-16 06:04:42.462115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.818 qpair failed and we were unable to recover it. 
00:36:08.818 [2024-12-16 06:04:42.472113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-12-16 06:04:42.472166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-12-16 06:04:42.472179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-12-16 06:04:42.472185] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-12-16 06:04:42.472191] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.818 [2024-12-16 06:04:42.472205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.818 qpair failed and we were unable to recover it. 00:36:08.818 [2024-12-16 06:04:42.482102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-12-16 06:04:42.482189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-12-16 06:04:42.482204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-12-16 06:04:42.482211] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-12-16 06:04:42.482216] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.818 [2024-12-16 06:04:42.482231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.818 qpair failed and we were unable to recover it. 00:36:08.818 [2024-12-16 06:04:42.492151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-12-16 06:04:42.492218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-12-16 06:04:42.492230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-12-16 06:04:42.492236] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-12-16 06:04:42.492242] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.818 [2024-12-16 06:04:42.492256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.818 qpair failed and we were unable to recover it. 
00:36:08.818 [2024-12-16 06:04:42.502158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-12-16 06:04:42.502216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-12-16 06:04:42.502228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-12-16 06:04:42.502234] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-12-16 06:04:42.502240] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.818 [2024-12-16 06:04:42.502253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.818 qpair failed and we were unable to recover it. 00:36:08.818 [2024-12-16 06:04:42.512204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-12-16 06:04:42.512262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-12-16 06:04:42.512275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-12-16 06:04:42.512281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-12-16 06:04:42.512287] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.818 [2024-12-16 06:04:42.512301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.818 qpair failed and we were unable to recover it. 00:36:08.818 [2024-12-16 06:04:42.522215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-12-16 06:04:42.522296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-12-16 06:04:42.522309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-12-16 06:04:42.522315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-12-16 06:04:42.522321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.818 [2024-12-16 06:04:42.522337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.818 qpair failed and we were unable to recover it. 
00:36:08.818 [2024-12-16 06:04:42.532274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-12-16 06:04:42.532336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.818 [2024-12-16 06:04:42.532349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.818 [2024-12-16 06:04:42.532356] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.818 [2024-12-16 06:04:42.532361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.818 [2024-12-16 06:04:42.532376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.818 qpair failed and we were unable to recover it. 00:36:08.818 [2024-12-16 06:04:42.542323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.818 [2024-12-16 06:04:42.542386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-12-16 06:04:42.542399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-12-16 06:04:42.542405] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-12-16 06:04:42.542411] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.819 [2024-12-16 06:04:42.542425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.819 qpair failed and we were unable to recover it. 00:36:08.819 [2024-12-16 06:04:42.552302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-12-16 06:04:42.552358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-12-16 06:04:42.552371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-12-16 06:04:42.552377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-12-16 06:04:42.552383] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.819 [2024-12-16 06:04:42.552397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.819 qpair failed and we were unable to recover it. 
00:36:08.819 [2024-12-16 06:04:42.562341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-12-16 06:04:42.562405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-12-16 06:04:42.562417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-12-16 06:04:42.562423] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-12-16 06:04:42.562429] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.819 [2024-12-16 06:04:42.562443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.819 qpair failed and we were unable to recover it. 00:36:08.819 [2024-12-16 06:04:42.572386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-12-16 06:04:42.572443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-12-16 06:04:42.572455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-12-16 06:04:42.572461] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-12-16 06:04:42.572467] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.819 [2024-12-16 06:04:42.572481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.819 qpair failed and we were unable to recover it. 00:36:08.819 [2024-12-16 06:04:42.582356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-12-16 06:04:42.582415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-12-16 06:04:42.582428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-12-16 06:04:42.582435] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-12-16 06:04:42.582440] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.819 [2024-12-16 06:04:42.582454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.819 qpair failed and we were unable to recover it. 
00:36:08.819 [2024-12-16 06:04:42.592452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-12-16 06:04:42.592509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-12-16 06:04:42.592522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-12-16 06:04:42.592528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-12-16 06:04:42.592534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.819 [2024-12-16 06:04:42.592548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.819 qpair failed and we were unable to recover it. 00:36:08.819 [2024-12-16 06:04:42.602429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-12-16 06:04:42.602512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-12-16 06:04:42.602526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-12-16 06:04:42.602532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-12-16 06:04:42.602538] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.819 [2024-12-16 06:04:42.602552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.819 qpair failed and we were unable to recover it. 00:36:08.819 [2024-12-16 06:04:42.612434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-12-16 06:04:42.612537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-12-16 06:04:42.612550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-12-16 06:04:42.612557] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-12-16 06:04:42.612565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.819 [2024-12-16 06:04:42.612580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.819 qpair failed and we were unable to recover it. 
00:36:08.819 [2024-12-16 06:04:42.622498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-12-16 06:04:42.622570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-12-16 06:04:42.622583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-12-16 06:04:42.622589] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-12-16 06:04:42.622595] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.819 [2024-12-16 06:04:42.622609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.819 qpair failed and we were unable to recover it. 00:36:08.819 [2024-12-16 06:04:42.632533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-12-16 06:04:42.632592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-12-16 06:04:42.632606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-12-16 06:04:42.632612] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-12-16 06:04:42.632618] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.819 [2024-12-16 06:04:42.632632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.819 qpair failed and we were unable to recover it. 00:36:08.819 [2024-12-16 06:04:42.642539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-12-16 06:04:42.642590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-12-16 06:04:42.642603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-12-16 06:04:42.642609] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-12-16 06:04:42.642615] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.819 [2024-12-16 06:04:42.642629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.819 qpair failed and we were unable to recover it. 
00:36:08.819 [2024-12-16 06:04:42.652561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-12-16 06:04:42.652614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-12-16 06:04:42.652627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-12-16 06:04:42.652634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-12-16 06:04:42.652640] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.819 [2024-12-16 06:04:42.652654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.819 qpair failed and we were unable to recover it. 00:36:08.819 [2024-12-16 06:04:42.662559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.819 [2024-12-16 06:04:42.662646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.819 [2024-12-16 06:04:42.662659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.819 [2024-12-16 06:04:42.662665] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.819 [2024-12-16 06:04:42.662671] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:08.819 [2024-12-16 06:04:42.662685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.819 qpair failed and we were unable to recover it. 00:36:09.079 [2024-12-16 06:04:42.672631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.079 [2024-12-16 06:04:42.672715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.079 [2024-12-16 06:04:42.672730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.079 [2024-12-16 06:04:42.672737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.079 [2024-12-16 06:04:42.672744] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.079 [2024-12-16 06:04:42.672759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.079 qpair failed and we were unable to recover it. 
00:36:09.079 [2024-12-16 06:04:42.682624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.079 [2024-12-16 06:04:42.682675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.079 [2024-12-16 06:04:42.682688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.079 [2024-12-16 06:04:42.682695] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.079 [2024-12-16 06:04:42.682701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.079 [2024-12-16 06:04:42.682715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.079 qpair failed and we were unable to recover it. 00:36:09.079 [2024-12-16 06:04:42.692680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.079 [2024-12-16 06:04:42.692761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.079 [2024-12-16 06:04:42.692775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.079 [2024-12-16 06:04:42.692781] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.079 [2024-12-16 06:04:42.692787] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.079 [2024-12-16 06:04:42.692801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.079 qpair failed and we were unable to recover it. 00:36:09.080 [2024-12-16 06:04:42.702717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.080 [2024-12-16 06:04:42.702777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-12-16 06:04:42.702789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-12-16 06:04:42.702799] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-12-16 06:04:42.702805] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.080 [2024-12-16 06:04:42.702819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.080 qpair failed and we were unable to recover it. 
00:36:09.080 [2024-12-16 06:04:42.712739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.080 [2024-12-16 06:04:42.712791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-12-16 06:04:42.712804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-12-16 06:04:42.712811] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-12-16 06:04:42.712817] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.080 [2024-12-16 06:04:42.712831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.080 qpair failed and we were unable to recover it. 00:36:09.080 [2024-12-16 06:04:42.722764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.080 [2024-12-16 06:04:42.722816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-12-16 06:04:42.722830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-12-16 06:04:42.722837] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-12-16 06:04:42.722843] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.080 [2024-12-16 06:04:42.722862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.080 qpair failed and we were unable to recover it. 00:36:09.080 [2024-12-16 06:04:42.732823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.080 [2024-12-16 06:04:42.732908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-12-16 06:04:42.732922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-12-16 06:04:42.732928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-12-16 06:04:42.732934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.080 [2024-12-16 06:04:42.732949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.080 qpair failed and we were unable to recover it. 
00:36:09.080 [2024-12-16 06:04:42.742852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.080 [2024-12-16 06:04:42.742915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-12-16 06:04:42.742929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-12-16 06:04:42.742935] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-12-16 06:04:42.742941] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.080 [2024-12-16 06:04:42.742955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.080 qpair failed and we were unable to recover it. 00:36:09.080 [2024-12-16 06:04:42.752866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.080 [2024-12-16 06:04:42.752945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-12-16 06:04:42.752958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-12-16 06:04:42.752965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-12-16 06:04:42.752971] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.080 [2024-12-16 06:04:42.752985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.080 qpair failed and we were unable to recover it. 00:36:09.080 [2024-12-16 06:04:42.762880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.080 [2024-12-16 06:04:42.762934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-12-16 06:04:42.762947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-12-16 06:04:42.762953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-12-16 06:04:42.762959] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.080 [2024-12-16 06:04:42.762973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.080 qpair failed and we were unable to recover it. 
00:36:09.080 [2024-12-16 06:04:42.772860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.080 [2024-12-16 06:04:42.772914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-12-16 06:04:42.772928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-12-16 06:04:42.772934] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-12-16 06:04:42.772940] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.080 [2024-12-16 06:04:42.772954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.080 qpair failed and we were unable to recover it. 00:36:09.080 [2024-12-16 06:04:42.782921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.080 [2024-12-16 06:04:42.782980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-12-16 06:04:42.782993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-12-16 06:04:42.783000] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-12-16 06:04:42.783006] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.080 [2024-12-16 06:04:42.783020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.080 qpair failed and we were unable to recover it. 00:36:09.080 [2024-12-16 06:04:42.792964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.080 [2024-12-16 06:04:42.793026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-12-16 06:04:42.793039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-12-16 06:04:42.793049] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-12-16 06:04:42.793055] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.080 [2024-12-16 06:04:42.793070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.080 qpair failed and we were unable to recover it. 
00:36:09.080 [2024-12-16 06:04:42.803009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.080 [2024-12-16 06:04:42.803074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-12-16 06:04:42.803087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-12-16 06:04:42.803094] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-12-16 06:04:42.803100] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.080 [2024-12-16 06:04:42.803114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.080 qpair failed and we were unable to recover it. 00:36:09.080 [2024-12-16 06:04:42.813051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.080 [2024-12-16 06:04:42.813115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-12-16 06:04:42.813127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-12-16 06:04:42.813134] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-12-16 06:04:42.813140] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.080 [2024-12-16 06:04:42.813154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.080 qpair failed and we were unable to recover it. 00:36:09.080 [2024-12-16 06:04:42.823105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.080 [2024-12-16 06:04:42.823165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.080 [2024-12-16 06:04:42.823179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.080 [2024-12-16 06:04:42.823186] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.080 [2024-12-16 06:04:42.823192] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.080 [2024-12-16 06:04:42.823206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.080 qpair failed and we were unable to recover it. 
00:36:09.080 [2024-12-16 06:04:42.833078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.081 [2024-12-16 06:04:42.833134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.081 [2024-12-16 06:04:42.833146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.081 [2024-12-16 06:04:42.833152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.081 [2024-12-16 06:04:42.833158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.081 [2024-12-16 06:04:42.833173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.081 qpair failed and we were unable to recover it. 00:36:09.081 [2024-12-16 06:04:42.843154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.081 [2024-12-16 06:04:42.843218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.081 [2024-12-16 06:04:42.843232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.081 [2024-12-16 06:04:42.843238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.081 [2024-12-16 06:04:42.843244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.081 [2024-12-16 06:04:42.843258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.081 qpair failed and we were unable to recover it. 00:36:09.081 [2024-12-16 06:04:42.853127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.081 [2024-12-16 06:04:42.853215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.081 [2024-12-16 06:04:42.853228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.081 [2024-12-16 06:04:42.853234] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.081 [2024-12-16 06:04:42.853240] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.081 [2024-12-16 06:04:42.853254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.081 qpair failed and we were unable to recover it. 
00:36:09.081 [2024-12-16 06:04:42.863169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.081 [2024-12-16 06:04:42.863224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.081 [2024-12-16 06:04:42.863237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.081 [2024-12-16 06:04:42.863243] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.081 [2024-12-16 06:04:42.863249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.081 [2024-12-16 06:04:42.863263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.081 qpair failed and we were unable to recover it. 00:36:09.081 [2024-12-16 06:04:42.873193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.081 [2024-12-16 06:04:42.873253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.081 [2024-12-16 06:04:42.873265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.081 [2024-12-16 06:04:42.873272] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.081 [2024-12-16 06:04:42.873278] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.081 [2024-12-16 06:04:42.873291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.081 qpair failed and we were unable to recover it. 00:36:09.081 [2024-12-16 06:04:42.883221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.081 [2024-12-16 06:04:42.883274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.081 [2024-12-16 06:04:42.883289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.081 [2024-12-16 06:04:42.883296] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.081 [2024-12-16 06:04:42.883302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.081 [2024-12-16 06:04:42.883316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.081 qpair failed and we were unable to recover it. 
00:36:09.081 [2024-12-16 06:04:42.893248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.081 [2024-12-16 06:04:42.893302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.081 [2024-12-16 06:04:42.893315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.081 [2024-12-16 06:04:42.893321] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.081 [2024-12-16 06:04:42.893327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.081 [2024-12-16 06:04:42.893341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.081 qpair failed and we were unable to recover it. 00:36:09.081 [2024-12-16 06:04:42.903295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.081 [2024-12-16 06:04:42.903393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.081 [2024-12-16 06:04:42.903405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.081 [2024-12-16 06:04:42.903411] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.081 [2024-12-16 06:04:42.903417] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.081 [2024-12-16 06:04:42.903431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.081 qpair failed and we were unable to recover it. 00:36:09.081 [2024-12-16 06:04:42.913238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.081 [2024-12-16 06:04:42.913294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.081 [2024-12-16 06:04:42.913307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.081 [2024-12-16 06:04:42.913313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.081 [2024-12-16 06:04:42.913319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.081 [2024-12-16 06:04:42.913333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.081 qpair failed and we were unable to recover it. 
00:36:09.081 [2024-12-16 06:04:42.923320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.081 [2024-12-16 06:04:42.923392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.081 [2024-12-16 06:04:42.923405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.081 [2024-12-16 06:04:42.923411] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.081 [2024-12-16 06:04:42.923416] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.081 [2024-12-16 06:04:42.923433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.081 qpair failed and we were unable to recover it. 00:36:09.081 [2024-12-16 06:04:42.933413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.081 [2024-12-16 06:04:42.933516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.081 [2024-12-16 06:04:42.933529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.081 [2024-12-16 06:04:42.933535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.081 [2024-12-16 06:04:42.933542] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.081 [2024-12-16 06:04:42.933556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.081 qpair failed and we were unable to recover it. 00:36:09.341 [2024-12-16 06:04:42.943403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.341 [2024-12-16 06:04:42.943478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.341 [2024-12-16 06:04:42.943491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.341 [2024-12-16 06:04:42.943497] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.341 [2024-12-16 06:04:42.943502] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.341 [2024-12-16 06:04:42.943517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.341 qpair failed and we were unable to recover it. 
00:36:09.341 [2024-12-16 06:04:42.953416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.341 [2024-12-16 06:04:42.953500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.341 [2024-12-16 06:04:42.953513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.341 [2024-12-16 06:04:42.953519] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.341 [2024-12-16 06:04:42.953525] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.341 [2024-12-16 06:04:42.953538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.341 qpair failed and we were unable to recover it. 00:36:09.341 [2024-12-16 06:04:42.963471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.341 [2024-12-16 06:04:42.963534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.341 [2024-12-16 06:04:42.963547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.341 [2024-12-16 06:04:42.963554] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.341 [2024-12-16 06:04:42.963560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.341 [2024-12-16 06:04:42.963574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.341 qpair failed and we were unable to recover it. 00:36:09.341 [2024-12-16 06:04:42.973489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.341 [2024-12-16 06:04:42.973540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.341 [2024-12-16 06:04:42.973557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.341 [2024-12-16 06:04:42.973564] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.341 [2024-12-16 06:04:42.973570] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.341 [2024-12-16 06:04:42.973585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.342 qpair failed and we were unable to recover it. 
00:36:09.342 [2024-12-16 06:04:42.983559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.342 [2024-12-16 06:04:42.983615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.342 [2024-12-16 06:04:42.983628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.342 [2024-12-16 06:04:42.983634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.342 [2024-12-16 06:04:42.983640] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.342 [2024-12-16 06:04:42.983654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.342 qpair failed and we were unable to recover it. 00:36:09.342 [2024-12-16 06:04:42.993581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.342 [2024-12-16 06:04:42.993638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.342 [2024-12-16 06:04:42.993650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.342 [2024-12-16 06:04:42.993656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.342 [2024-12-16 06:04:42.993662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.342 [2024-12-16 06:04:42.993676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.342 qpair failed and we were unable to recover it. 00:36:09.342 [2024-12-16 06:04:43.003489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.342 [2024-12-16 06:04:43.003557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.342 [2024-12-16 06:04:43.003569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.342 [2024-12-16 06:04:43.003576] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.342 [2024-12-16 06:04:43.003581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.342 [2024-12-16 06:04:43.003596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.342 qpair failed and we were unable to recover it. 
00:36:09.342 [2024-12-16 06:04:43.013574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.342 [2024-12-16 06:04:43.013626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.342 [2024-12-16 06:04:43.013638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.342 [2024-12-16 06:04:43.013644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.342 [2024-12-16 06:04:43.013650] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.342 [2024-12-16 06:04:43.013666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.342 qpair failed and we were unable to recover it. 00:36:09.342 [2024-12-16 06:04:43.023613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.342 [2024-12-16 06:04:43.023673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.342 [2024-12-16 06:04:43.023686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.342 [2024-12-16 06:04:43.023693] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.342 [2024-12-16 06:04:43.023699] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.342 [2024-12-16 06:04:43.023712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.342 qpair failed and we were unable to recover it. 00:36:09.342 [2024-12-16 06:04:43.033648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.342 [2024-12-16 06:04:43.033704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.342 [2024-12-16 06:04:43.033717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.342 [2024-12-16 06:04:43.033723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.342 [2024-12-16 06:04:43.033729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.342 [2024-12-16 06:04:43.033743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.342 qpair failed and we were unable to recover it. 
00:36:09.342 [2024-12-16 06:04:43.043691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.342 [2024-12-16 06:04:43.043745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.342 [2024-12-16 06:04:43.043757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.342 [2024-12-16 06:04:43.043763] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.342 [2024-12-16 06:04:43.043769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.342 [2024-12-16 06:04:43.043784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.342 qpair failed and we were unable to recover it. 00:36:09.342 [2024-12-16 06:04:43.053704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.342 [2024-12-16 06:04:43.053759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.342 [2024-12-16 06:04:43.053772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.342 [2024-12-16 06:04:43.053779] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.342 [2024-12-16 06:04:43.053785] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.342 [2024-12-16 06:04:43.053800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.342 qpair failed and we were unable to recover it. 00:36:09.342 [2024-12-16 06:04:43.063739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.342 [2024-12-16 06:04:43.063796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.342 [2024-12-16 06:04:43.063815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.342 [2024-12-16 06:04:43.063822] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.342 [2024-12-16 06:04:43.063828] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.342 [2024-12-16 06:04:43.063842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.342 qpair failed and we were unable to recover it. 
00:36:09.342 [2024-12-16 06:04:43.073767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.342 [2024-12-16 06:04:43.073823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.342 [2024-12-16 06:04:43.073836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.342 [2024-12-16 06:04:43.073843] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.342 [2024-12-16 06:04:43.073854] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.342 [2024-12-16 06:04:43.073869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.342 qpair failed and we were unable to recover it. 00:36:09.342 [2024-12-16 06:04:43.083793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.342 [2024-12-16 06:04:43.083845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.342 [2024-12-16 06:04:43.083862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.342 [2024-12-16 06:04:43.083868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.342 [2024-12-16 06:04:43.083874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.342 [2024-12-16 06:04:43.083889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.342 qpair failed and we were unable to recover it. 00:36:09.342 [2024-12-16 06:04:43.093817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.342 [2024-12-16 06:04:43.093872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.342 [2024-12-16 06:04:43.093886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.342 [2024-12-16 06:04:43.093892] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.342 [2024-12-16 06:04:43.093898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.342 [2024-12-16 06:04:43.093912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.342 qpair failed and we were unable to recover it. 
00:36:09.342 [2024-12-16 06:04:43.103841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.342 [2024-12-16 06:04:43.103907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.342 [2024-12-16 06:04:43.103930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.342 [2024-12-16 06:04:43.103937] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.342 [2024-12-16 06:04:43.103946] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.342 [2024-12-16 06:04:43.103965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.342 qpair failed and we were unable to recover it. 00:36:09.342 [2024-12-16 06:04:43.113890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.342 [2024-12-16 06:04:43.113946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.343 [2024-12-16 06:04:43.113960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.343 [2024-12-16 06:04:43.113966] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.343 [2024-12-16 06:04:43.113972] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.343 [2024-12-16 06:04:43.113986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.343 qpair failed and we were unable to recover it. 00:36:09.343 [2024-12-16 06:04:43.123907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.343 [2024-12-16 06:04:43.123964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.343 [2024-12-16 06:04:43.123977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.343 [2024-12-16 06:04:43.123983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.343 [2024-12-16 06:04:43.123989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.343 [2024-12-16 06:04:43.124003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.343 qpair failed and we were unable to recover it. 
00:36:09.343 [2024-12-16 06:04:43.133924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.343 [2024-12-16 06:04:43.133984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.343 [2024-12-16 06:04:43.133998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.343 [2024-12-16 06:04:43.134005] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.343 [2024-12-16 06:04:43.134011] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.343 [2024-12-16 06:04:43.134025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.343 qpair failed and we were unable to recover it. 00:36:09.343 [2024-12-16 06:04:43.143993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.343 [2024-12-16 06:04:43.144097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.343 [2024-12-16 06:04:43.144109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.343 [2024-12-16 06:04:43.144116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.343 [2024-12-16 06:04:43.144122] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.343 [2024-12-16 06:04:43.144136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.343 qpair failed and we were unable to recover it. 00:36:09.343 [2024-12-16 06:04:43.154006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.343 [2024-12-16 06:04:43.154075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.343 [2024-12-16 06:04:43.154089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.343 [2024-12-16 06:04:43.154095] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.343 [2024-12-16 06:04:43.154101] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.343 [2024-12-16 06:04:43.154116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.343 qpair failed and we were unable to recover it. 
00:36:09.343 [2024-12-16 06:04:43.164023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.343 [2024-12-16 06:04:43.164079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.343 [2024-12-16 06:04:43.164092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.343 [2024-12-16 06:04:43.164099] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.343 [2024-12-16 06:04:43.164105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.343 [2024-12-16 06:04:43.164120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.343 qpair failed and we were unable to recover it. 00:36:09.343 [2024-12-16 06:04:43.174014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.343 [2024-12-16 06:04:43.174070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.343 [2024-12-16 06:04:43.174083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.343 [2024-12-16 06:04:43.174089] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.343 [2024-12-16 06:04:43.174095] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.343 [2024-12-16 06:04:43.174109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.343 qpair failed and we were unable to recover it. 00:36:09.343 [2024-12-16 06:04:43.184060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.343 [2024-12-16 06:04:43.184119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.343 [2024-12-16 06:04:43.184132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.343 [2024-12-16 06:04:43.184139] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.343 [2024-12-16 06:04:43.184144] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.343 [2024-12-16 06:04:43.184159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.343 qpair failed and we were unable to recover it. 
00:36:09.343 [2024-12-16 06:04:43.194114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.343 [2024-12-16 06:04:43.194167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.343 [2024-12-16 06:04:43.194180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.343 [2024-12-16 06:04:43.194186] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.343 [2024-12-16 06:04:43.194195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.343 [2024-12-16 06:04:43.194210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.343 qpair failed and we were unable to recover it. 00:36:09.603 [2024-12-16 06:04:43.204139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.603 [2024-12-16 06:04:43.204199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.603 [2024-12-16 06:04:43.204212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.603 [2024-12-16 06:04:43.204218] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.603 [2024-12-16 06:04:43.204224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.603 [2024-12-16 06:04:43.204238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.603 qpair failed and we were unable to recover it. 00:36:09.603 [2024-12-16 06:04:43.214164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.603 [2024-12-16 06:04:43.214217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.603 [2024-12-16 06:04:43.214230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.603 [2024-12-16 06:04:43.214236] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.603 [2024-12-16 06:04:43.214243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.603 [2024-12-16 06:04:43.214257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.603 qpair failed and we were unable to recover it. 
00:36:09.603 [2024-12-16 06:04:43.224174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.603 [2024-12-16 06:04:43.224235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.603 [2024-12-16 06:04:43.224248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.603 [2024-12-16 06:04:43.224254] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.603 [2024-12-16 06:04:43.224260] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.603 [2024-12-16 06:04:43.224274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.603 qpair failed and we were unable to recover it. 00:36:09.603 [2024-12-16 06:04:43.234193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.603 [2024-12-16 06:04:43.234276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.603 [2024-12-16 06:04:43.234289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.603 [2024-12-16 06:04:43.234295] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.603 [2024-12-16 06:04:43.234302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.603 [2024-12-16 06:04:43.234316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.603 qpair failed and we were unable to recover it. 00:36:09.603 [2024-12-16 06:04:43.244228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.603 [2024-12-16 06:04:43.244281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.603 [2024-12-16 06:04:43.244294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.603 [2024-12-16 06:04:43.244300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.603 [2024-12-16 06:04:43.244306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.603 [2024-12-16 06:04:43.244321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.603 qpair failed and we were unable to recover it. 
00:36:09.603 [2024-12-16 06:04:43.254299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.603 [2024-12-16 06:04:43.254385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.603 [2024-12-16 06:04:43.254400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.603 [2024-12-16 06:04:43.254407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.603 [2024-12-16 06:04:43.254412] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.603 [2024-12-16 06:04:43.254427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.603 qpair failed and we were unable to recover it. 00:36:09.603 [2024-12-16 06:04:43.264279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.603 [2024-12-16 06:04:43.264335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.603 [2024-12-16 06:04:43.264348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.603 [2024-12-16 06:04:43.264354] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.603 [2024-12-16 06:04:43.264361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.603 [2024-12-16 06:04:43.264375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.603 qpair failed and we were unable to recover it. 00:36:09.603 [2024-12-16 06:04:43.274343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.603 [2024-12-16 06:04:43.274411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.603 [2024-12-16 06:04:43.274425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.603 [2024-12-16 06:04:43.274431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.603 [2024-12-16 06:04:43.274437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.603 [2024-12-16 06:04:43.274451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.603 qpair failed and we were unable to recover it. 
00:36:09.603 [2024-12-16 06:04:43.284365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.604 [2024-12-16 06:04:43.284437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.604 [2024-12-16 06:04:43.284450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.604 [2024-12-16 06:04:43.284460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.604 [2024-12-16 06:04:43.284466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.604 [2024-12-16 06:04:43.284481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.604 qpair failed and we were unable to recover it. 00:36:09.604 [2024-12-16 06:04:43.294301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.604 [2024-12-16 06:04:43.294357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.604 [2024-12-16 06:04:43.294370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.604 [2024-12-16 06:04:43.294376] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.604 [2024-12-16 06:04:43.294382] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.604 [2024-12-16 06:04:43.294396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.604 qpair failed and we were unable to recover it. 00:36:09.604 [2024-12-16 06:04:43.304358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.604 [2024-12-16 06:04:43.304443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.604 [2024-12-16 06:04:43.304457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.604 [2024-12-16 06:04:43.304463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.604 [2024-12-16 06:04:43.304468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.604 [2024-12-16 06:04:43.304482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.604 qpair failed and we were unable to recover it. 
00:36:09.604 [2024-12-16 06:04:43.314446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.604 [2024-12-16 06:04:43.314500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.604 [2024-12-16 06:04:43.314513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.604 [2024-12-16 06:04:43.314520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.604 [2024-12-16 06:04:43.314525] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.604 [2024-12-16 06:04:43.314540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.604 qpair failed and we were unable to recover it. 00:36:09.604 [2024-12-16 06:04:43.324417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.604 [2024-12-16 06:04:43.324502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.604 [2024-12-16 06:04:43.324515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.604 [2024-12-16 06:04:43.324521] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.604 [2024-12-16 06:04:43.324527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.604 [2024-12-16 06:04:43.324541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.604 qpair failed and we were unable to recover it. 00:36:09.604 [2024-12-16 06:04:43.334423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.604 [2024-12-16 06:04:43.334478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.604 [2024-12-16 06:04:43.334492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.604 [2024-12-16 06:04:43.334498] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.604 [2024-12-16 06:04:43.334504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.604 [2024-12-16 06:04:43.334519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.604 qpair failed and we were unable to recover it. 
00:36:09.604 [2024-12-16 06:04:43.344457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.604 [2024-12-16 06:04:43.344514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.604 [2024-12-16 06:04:43.344527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.604 [2024-12-16 06:04:43.344533] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.604 [2024-12-16 06:04:43.344539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.604 [2024-12-16 06:04:43.344554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.604 qpair failed and we were unable to recover it. 00:36:09.604 [2024-12-16 06:04:43.354571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.604 [2024-12-16 06:04:43.354627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.604 [2024-12-16 06:04:43.354640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.604 [2024-12-16 06:04:43.354646] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.604 [2024-12-16 06:04:43.354652] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.604 [2024-12-16 06:04:43.354666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.604 qpair failed and we were unable to recover it. 00:36:09.604 [2024-12-16 06:04:43.364596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.604 [2024-12-16 06:04:43.364649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.604 [2024-12-16 06:04:43.364662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.604 [2024-12-16 06:04:43.364668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.604 [2024-12-16 06:04:43.364674] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.604 [2024-12-16 06:04:43.364688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.604 qpair failed and we were unable to recover it. 
00:36:09.604 [2024-12-16 06:04:43.374548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.604 [2024-12-16 06:04:43.374602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.604 [2024-12-16 06:04:43.374619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.604 [2024-12-16 06:04:43.374626] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.604 [2024-12-16 06:04:43.374631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.604 [2024-12-16 06:04:43.374645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.604 qpair failed and we were unable to recover it. 00:36:09.604 [2024-12-16 06:04:43.384647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.604 [2024-12-16 06:04:43.384701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.604 [2024-12-16 06:04:43.384714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.604 [2024-12-16 06:04:43.384720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.604 [2024-12-16 06:04:43.384726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.604 [2024-12-16 06:04:43.384741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.604 qpair failed and we were unable to recover it. 00:36:09.604 [2024-12-16 06:04:43.394599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.604 [2024-12-16 06:04:43.394680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.604 [2024-12-16 06:04:43.394694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.604 [2024-12-16 06:04:43.394700] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.604 [2024-12-16 06:04:43.394706] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.604 [2024-12-16 06:04:43.394720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.604 qpair failed and we were unable to recover it. 
00:36:09.604 [2024-12-16 06:04:43.404682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.604 [2024-12-16 06:04:43.404735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.604 [2024-12-16 06:04:43.404748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.604 [2024-12-16 06:04:43.404754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.604 [2024-12-16 06:04:43.404760] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.604 [2024-12-16 06:04:43.404775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.604 qpair failed and we were unable to recover it. 00:36:09.604 [2024-12-16 06:04:43.414740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.604 [2024-12-16 06:04:43.414821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.604 [2024-12-16 06:04:43.414835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.604 [2024-12-16 06:04:43.414841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.605 [2024-12-16 06:04:43.414851] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.605 [2024-12-16 06:04:43.414865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.605 qpair failed and we were unable to recover it. 00:36:09.605 [2024-12-16 06:04:43.424813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.605 [2024-12-16 06:04:43.424901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.605 [2024-12-16 06:04:43.424914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.605 [2024-12-16 06:04:43.424921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.605 [2024-12-16 06:04:43.424927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.605 [2024-12-16 06:04:43.424941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.605 qpair failed and we were unable to recover it. 
00:36:09.605 [2024-12-16 06:04:43.434807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.605 [2024-12-16 06:04:43.434864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.605 [2024-12-16 06:04:43.434878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.605 [2024-12-16 06:04:43.434884] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.605 [2024-12-16 06:04:43.434890] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.605 [2024-12-16 06:04:43.434904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.605 qpair failed and we were unable to recover it. 00:36:09.605 [2024-12-16 06:04:43.444818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.605 [2024-12-16 06:04:43.444878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.605 [2024-12-16 06:04:43.444891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.605 [2024-12-16 06:04:43.444899] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.605 [2024-12-16 06:04:43.444905] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.605 [2024-12-16 06:04:43.444919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.605 qpair failed and we were unable to recover it. 00:36:09.605 [2024-12-16 06:04:43.454840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.605 [2024-12-16 06:04:43.454905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.605 [2024-12-16 06:04:43.454918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.605 [2024-12-16 06:04:43.454924] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.605 [2024-12-16 06:04:43.454930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.605 [2024-12-16 06:04:43.454945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.605 qpair failed and we were unable to recover it. 
00:36:09.865 [2024-12-16 06:04:43.464871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.865 [2024-12-16 06:04:43.464930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.865 [2024-12-16 06:04:43.464946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.865 [2024-12-16 06:04:43.464953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.865 [2024-12-16 06:04:43.464959] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.865 [2024-12-16 06:04:43.464973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.865 qpair failed and we were unable to recover it. 00:36:09.865 [2024-12-16 06:04:43.474954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.865 [2024-12-16 06:04:43.475018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.865 [2024-12-16 06:04:43.475032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.865 [2024-12-16 06:04:43.475038] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.865 [2024-12-16 06:04:43.475044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.865 [2024-12-16 06:04:43.475059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.865 qpair failed and we were unable to recover it. 00:36:09.865 [2024-12-16 06:04:43.484895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.865 [2024-12-16 06:04:43.484944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.865 [2024-12-16 06:04:43.484957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.865 [2024-12-16 06:04:43.484963] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.865 [2024-12-16 06:04:43.484969] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.865 [2024-12-16 06:04:43.484984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.865 qpair failed and we were unable to recover it. 
00:36:09.865 [2024-12-16 06:04:43.494982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.865 [2024-12-16 06:04:43.495065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.865 [2024-12-16 06:04:43.495078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.865 [2024-12-16 06:04:43.495085] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.865 [2024-12-16 06:04:43.495091] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.865 [2024-12-16 06:04:43.495105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.865 qpair failed and we were unable to recover it. 00:36:09.865 [2024-12-16 06:04:43.504989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.865 [2024-12-16 06:04:43.505048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.865 [2024-12-16 06:04:43.505061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.865 [2024-12-16 06:04:43.505067] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.865 [2024-12-16 06:04:43.505073] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.865 [2024-12-16 06:04:43.505090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.865 qpair failed and we were unable to recover it. 00:36:09.865 [2024-12-16 06:04:43.514988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.865 [2024-12-16 06:04:43.515050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.865 [2024-12-16 06:04:43.515064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.865 [2024-12-16 06:04:43.515070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.865 [2024-12-16 06:04:43.515076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.865 [2024-12-16 06:04:43.515090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.865 qpair failed and we were unable to recover it. 
00:36:09.865 [2024-12-16 06:04:43.524958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.865 [2024-12-16 06:04:43.525024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.865 [2024-12-16 06:04:43.525037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.865 [2024-12-16 06:04:43.525043] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.865 [2024-12-16 06:04:43.525049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.865 [2024-12-16 06:04:43.525064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.865 qpair failed and we were unable to recover it. 00:36:09.865 [2024-12-16 06:04:43.535075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.865 [2024-12-16 06:04:43.535157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.865 [2024-12-16 06:04:43.535170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.865 [2024-12-16 06:04:43.535177] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.865 [2024-12-16 06:04:43.535183] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.865 [2024-12-16 06:04:43.535197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.865 qpair failed and we were unable to recover it. 00:36:09.865 [2024-12-16 06:04:43.545057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.865 [2024-12-16 06:04:43.545143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.865 [2024-12-16 06:04:43.545156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.865 [2024-12-16 06:04:43.545162] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.865 [2024-12-16 06:04:43.545168] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.865 [2024-12-16 06:04:43.545183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.865 qpair failed and we were unable to recover it. 
00:36:09.865 [2024-12-16 06:04:43.555181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.865 [2024-12-16 06:04:43.555234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.865 [2024-12-16 06:04:43.555251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.865 [2024-12-16 06:04:43.555257] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.865 [2024-12-16 06:04:43.555263] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.865 [2024-12-16 06:04:43.555278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.865 qpair failed and we were unable to recover it. 00:36:09.865 [2024-12-16 06:04:43.565123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.865 [2024-12-16 06:04:43.565223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.865 [2024-12-16 06:04:43.565236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.866 [2024-12-16 06:04:43.565242] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.866 [2024-12-16 06:04:43.565248] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.866 [2024-12-16 06:04:43.565263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.866 qpair failed and we were unable to recover it. 00:36:09.866 [2024-12-16 06:04:43.575083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.866 [2024-12-16 06:04:43.575139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.866 [2024-12-16 06:04:43.575153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.866 [2024-12-16 06:04:43.575159] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.866 [2024-12-16 06:04:43.575165] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.866 [2024-12-16 06:04:43.575179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.866 qpair failed and we were unable to recover it. 
00:36:09.866 [2024-12-16 06:04:43.585186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.866 [2024-12-16 06:04:43.585240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.866 [2024-12-16 06:04:43.585253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.866 [2024-12-16 06:04:43.585259] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.866 [2024-12-16 06:04:43.585265] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.866 [2024-12-16 06:04:43.585279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.866 qpair failed and we were unable to recover it. 00:36:09.866 [2024-12-16 06:04:43.595162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.866 [2024-12-16 06:04:43.595220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.866 [2024-12-16 06:04:43.595233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.866 [2024-12-16 06:04:43.595239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.866 [2024-12-16 06:04:43.595248] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.866 [2024-12-16 06:04:43.595261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.866 qpair failed and we were unable to recover it. 00:36:09.866 [2024-12-16 06:04:43.605199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.866 [2024-12-16 06:04:43.605253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.866 [2024-12-16 06:04:43.605266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.866 [2024-12-16 06:04:43.605273] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.866 [2024-12-16 06:04:43.605279] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.866 [2024-12-16 06:04:43.605293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.866 qpair failed and we were unable to recover it. 
00:36:09.866 [2024-12-16 06:04:43.615282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.866 [2024-12-16 06:04:43.615335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.866 [2024-12-16 06:04:43.615348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.866 [2024-12-16 06:04:43.615355] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.866 [2024-12-16 06:04:43.615361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.866 [2024-12-16 06:04:43.615376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.866 qpair failed and we were unable to recover it. 00:36:09.866 [2024-12-16 06:04:43.625341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.866 [2024-12-16 06:04:43.625399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.866 [2024-12-16 06:04:43.625413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.866 [2024-12-16 06:04:43.625419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.866 [2024-12-16 06:04:43.625425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.866 [2024-12-16 06:04:43.625439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.866 qpair failed and we were unable to recover it. 00:36:09.866 [2024-12-16 06:04:43.635352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.866 [2024-12-16 06:04:43.635424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.866 [2024-12-16 06:04:43.635438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.866 [2024-12-16 06:04:43.635444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.866 [2024-12-16 06:04:43.635450] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.866 [2024-12-16 06:04:43.635464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.866 qpair failed and we were unable to recover it. 
00:36:09.866 [2024-12-16 06:04:43.645335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.866 [2024-12-16 06:04:43.645395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.866 [2024-12-16 06:04:43.645408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.866 [2024-12-16 06:04:43.645415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.866 [2024-12-16 06:04:43.645421] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.866 [2024-12-16 06:04:43.645435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.866 qpair failed and we were unable to recover it. 00:36:09.866 [2024-12-16 06:04:43.655382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.866 [2024-12-16 06:04:43.655433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.866 [2024-12-16 06:04:43.655446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.866 [2024-12-16 06:04:43.655453] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.866 [2024-12-16 06:04:43.655459] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.866 [2024-12-16 06:04:43.655473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.866 qpair failed and we were unable to recover it. 00:36:09.866 [2024-12-16 06:04:43.665383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.866 [2024-12-16 06:04:43.665462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.866 [2024-12-16 06:04:43.665475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.866 [2024-12-16 06:04:43.665481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.866 [2024-12-16 06:04:43.665487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.866 [2024-12-16 06:04:43.665501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.866 qpair failed and we were unable to recover it. 
00:36:09.866 [2024-12-16 06:04:43.675490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.866 [2024-12-16 06:04:43.675554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.866 [2024-12-16 06:04:43.675569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.866 [2024-12-16 06:04:43.675575] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.866 [2024-12-16 06:04:43.675581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.866 [2024-12-16 06:04:43.675596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.866 qpair failed and we were unable to recover it. 00:36:09.866 [2024-12-16 06:04:43.685460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.866 [2024-12-16 06:04:43.685513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.866 [2024-12-16 06:04:43.685526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.866 [2024-12-16 06:04:43.685532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.866 [2024-12-16 06:04:43.685541] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.866 [2024-12-16 06:04:43.685555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.866 qpair failed and we were unable to recover it. 00:36:09.866 [2024-12-16 06:04:43.695498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.866 [2024-12-16 06:04:43.695549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.867 [2024-12-16 06:04:43.695563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.867 [2024-12-16 06:04:43.695569] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.867 [2024-12-16 06:04:43.695575] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.867 [2024-12-16 06:04:43.695589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.867 qpair failed and we were unable to recover it. 
00:36:09.867 [2024-12-16 06:04:43.705540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.867 [2024-12-16 06:04:43.705600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.867 [2024-12-16 06:04:43.705613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.867 [2024-12-16 06:04:43.705619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.867 [2024-12-16 06:04:43.705626] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.867 [2024-12-16 06:04:43.705639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.867 qpair failed and we were unable to recover it. 00:36:09.867 [2024-12-16 06:04:43.715602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.867 [2024-12-16 06:04:43.715665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.867 [2024-12-16 06:04:43.715679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.867 [2024-12-16 06:04:43.715685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.867 [2024-12-16 06:04:43.715691] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:09.867 [2024-12-16 06:04:43.715705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.867 qpair failed and we were unable to recover it. 00:36:10.127 [2024-12-16 06:04:43.725575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.127 [2024-12-16 06:04:43.725624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.127 [2024-12-16 06:04:43.725639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.127 [2024-12-16 06:04:43.725646] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.127 [2024-12-16 06:04:43.725652] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.127 [2024-12-16 06:04:43.725667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.127 qpair failed and we were unable to recover it. 
00:36:10.127 [2024-12-16 06:04:43.735606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.127 [2024-12-16 06:04:43.735660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.127 [2024-12-16 06:04:43.735674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.127 [2024-12-16 06:04:43.735680] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.127 [2024-12-16 06:04:43.735686] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.127 [2024-12-16 06:04:43.735701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.127 qpair failed and we were unable to recover it. 00:36:10.127 [2024-12-16 06:04:43.745651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.127 [2024-12-16 06:04:43.745718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.127 [2024-12-16 06:04:43.745732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.127 [2024-12-16 06:04:43.745738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.127 [2024-12-16 06:04:43.745744] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.127 [2024-12-16 06:04:43.745758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.127 qpair failed and we were unable to recover it. 00:36:10.127 [2024-12-16 06:04:43.755668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.127 [2024-12-16 06:04:43.755724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.127 [2024-12-16 06:04:43.755737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.127 [2024-12-16 06:04:43.755743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.127 [2024-12-16 06:04:43.755749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.127 [2024-12-16 06:04:43.755763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.127 qpair failed and we were unable to recover it. 
00:36:10.127 [2024-12-16 06:04:43.765620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.127 [2024-12-16 06:04:43.765675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.127 [2024-12-16 06:04:43.765688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.127 [2024-12-16 06:04:43.765694] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.127 [2024-12-16 06:04:43.765700] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.127 [2024-12-16 06:04:43.765714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.127 qpair failed and we were unable to recover it. 00:36:10.127 [2024-12-16 06:04:43.775734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.127 [2024-12-16 06:04:43.775785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.127 [2024-12-16 06:04:43.775798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.127 [2024-12-16 06:04:43.775807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.127 [2024-12-16 06:04:43.775813] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.127 [2024-12-16 06:04:43.775827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.127 qpair failed and we were unable to recover it. 00:36:10.127 [2024-12-16 06:04:43.785756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.127 [2024-12-16 06:04:43.785819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.127 [2024-12-16 06:04:43.785833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.127 [2024-12-16 06:04:43.785839] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.127 [2024-12-16 06:04:43.785845] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.127 [2024-12-16 06:04:43.785864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.127 qpair failed and we were unable to recover it. 
00:36:10.127 [2024-12-16 06:04:43.795778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.127 [2024-12-16 06:04:43.795835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.127 [2024-12-16 06:04:43.795851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.127 [2024-12-16 06:04:43.795858] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.127 [2024-12-16 06:04:43.795863] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.127 [2024-12-16 06:04:43.795877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.127 qpair failed and we were unable to recover it. 00:36:10.127 [2024-12-16 06:04:43.805836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.127 [2024-12-16 06:04:43.805898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.127 [2024-12-16 06:04:43.805912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.127 [2024-12-16 06:04:43.805918] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.127 [2024-12-16 06:04:43.805923] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.127 [2024-12-16 06:04:43.805938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.127 qpair failed and we were unable to recover it. 00:36:10.127 [2024-12-16 06:04:43.815866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.127 [2024-12-16 06:04:43.815920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.127 [2024-12-16 06:04:43.815932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.127 [2024-12-16 06:04:43.815939] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.127 [2024-12-16 06:04:43.815945] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.127 [2024-12-16 06:04:43.815959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.127 qpair failed and we were unable to recover it. 
00:36:10.127 [2024-12-16 06:04:43.825881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.127 [2024-12-16 06:04:43.825942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.127 [2024-12-16 06:04:43.825956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.127 [2024-12-16 06:04:43.825962] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.127 [2024-12-16 06:04:43.825968] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.127 [2024-12-16 06:04:43.825982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.127 qpair failed and we were unable to recover it. 00:36:10.127 [2024-12-16 06:04:43.835899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.127 [2024-12-16 06:04:43.835956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.127 [2024-12-16 06:04:43.835969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.127 [2024-12-16 06:04:43.835976] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.127 [2024-12-16 06:04:43.835982] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.127 [2024-12-16 06:04:43.835996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.128 qpair failed and we were unable to recover it. 00:36:10.128 [2024-12-16 06:04:43.845945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.128 [2024-12-16 06:04:43.846010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.128 [2024-12-16 06:04:43.846023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.128 [2024-12-16 06:04:43.846029] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.128 [2024-12-16 06:04:43.846035] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.128 [2024-12-16 06:04:43.846050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.128 qpair failed and we were unable to recover it. 
00:36:10.128 [2024-12-16 06:04:43.855945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.128 [2024-12-16 06:04:43.855998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.128 [2024-12-16 06:04:43.856012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.128 [2024-12-16 06:04:43.856018] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.128 [2024-12-16 06:04:43.856024] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.128 [2024-12-16 06:04:43.856039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.128 qpair failed and we were unable to recover it. 00:36:10.128 [2024-12-16 06:04:43.866054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.128 [2024-12-16 06:04:43.866110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.128 [2024-12-16 06:04:43.866124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.128 [2024-12-16 06:04:43.866133] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.128 [2024-12-16 06:04:43.866139] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.128 [2024-12-16 06:04:43.866154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.128 qpair failed and we were unable to recover it. 00:36:10.128 [2024-12-16 06:04:43.876027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.128 [2024-12-16 06:04:43.876086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.128 [2024-12-16 06:04:43.876099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.128 [2024-12-16 06:04:43.876106] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.128 [2024-12-16 06:04:43.876112] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.128 [2024-12-16 06:04:43.876127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.128 qpair failed and we were unable to recover it. 
00:36:10.128 [2024-12-16 06:04:43.886091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.128 [2024-12-16 06:04:43.886161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.128 [2024-12-16 06:04:43.886174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.128 [2024-12-16 06:04:43.886180] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.128 [2024-12-16 06:04:43.886186] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.128 [2024-12-16 06:04:43.886200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.128 qpair failed and we were unable to recover it. 00:36:10.128 [2024-12-16 06:04:43.896063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.128 [2024-12-16 06:04:43.896121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.128 [2024-12-16 06:04:43.896134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.128 [2024-12-16 06:04:43.896140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.128 [2024-12-16 06:04:43.896146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.128 [2024-12-16 06:04:43.896160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.128 qpair failed and we were unable to recover it. 00:36:10.128 [2024-12-16 06:04:43.906105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.128 [2024-12-16 06:04:43.906162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.128 [2024-12-16 06:04:43.906175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.128 [2024-12-16 06:04:43.906181] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.128 [2024-12-16 06:04:43.906187] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.128 [2024-12-16 06:04:43.906201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.128 qpair failed and we were unable to recover it. 
00:36:10.128 [2024-12-16 06:04:43.916140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.128 [2024-12-16 06:04:43.916196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.128 [2024-12-16 06:04:43.916209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.128 [2024-12-16 06:04:43.916216] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.128 [2024-12-16 06:04:43.916222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.128 [2024-12-16 06:04:43.916236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.128 qpair failed and we were unable to recover it. 00:36:10.128 [2024-12-16 06:04:43.926159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.128 [2024-12-16 06:04:43.926214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.128 [2024-12-16 06:04:43.926228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.128 [2024-12-16 06:04:43.926235] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.128 [2024-12-16 06:04:43.926241] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.128 [2024-12-16 06:04:43.926256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.128 qpair failed and we were unable to recover it. 00:36:10.128 [2024-12-16 06:04:43.936181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.128 [2024-12-16 06:04:43.936234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.128 [2024-12-16 06:04:43.936247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.128 [2024-12-16 06:04:43.936253] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.128 [2024-12-16 06:04:43.936259] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.128 [2024-12-16 06:04:43.936273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.128 qpair failed and we were unable to recover it. 
00:36:10.128 [2024-12-16 06:04:43.946235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.128 [2024-12-16 06:04:43.946291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.128 [2024-12-16 06:04:43.946304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.128 [2024-12-16 06:04:43.946310] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.128 [2024-12-16 06:04:43.946316] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.128 [2024-12-16 06:04:43.946331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.128 qpair failed and we were unable to recover it. 00:36:10.128 [2024-12-16 06:04:43.956272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.128 [2024-12-16 06:04:43.956326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.128 [2024-12-16 06:04:43.956342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.128 [2024-12-16 06:04:43.956349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.128 [2024-12-16 06:04:43.956354] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.128 [2024-12-16 06:04:43.956369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.128 qpair failed and we were unable to recover it. 00:36:10.128 [2024-12-16 06:04:43.966272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.128 [2024-12-16 06:04:43.966322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.128 [2024-12-16 06:04:43.966336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.128 [2024-12-16 06:04:43.966342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.128 [2024-12-16 06:04:43.966348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.128 [2024-12-16 06:04:43.966362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.128 qpair failed and we were unable to recover it. 
00:36:10.128 [2024-12-16 06:04:43.976324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.128 [2024-12-16 06:04:43.976376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.128 [2024-12-16 06:04:43.976389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.129 [2024-12-16 06:04:43.976395] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.129 [2024-12-16 06:04:43.976401] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.129 [2024-12-16 06:04:43.976415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.129 qpair failed and we were unable to recover it. 00:36:10.388 [2024-12-16 06:04:43.986341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.388 [2024-12-16 06:04:43.986399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.388 [2024-12-16 06:04:43.986412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.388 [2024-12-16 06:04:43.986418] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.388 [2024-12-16 06:04:43.986424] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.388 [2024-12-16 06:04:43.986437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.388 qpair failed and we were unable to recover it. 00:36:10.388 [2024-12-16 06:04:43.996385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.388 [2024-12-16 06:04:43.996453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.388 [2024-12-16 06:04:43.996466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.388 [2024-12-16 06:04:43.996472] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.388 [2024-12-16 06:04:43.996478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.388 [2024-12-16 06:04:43.996494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.388 qpair failed and we were unable to recover it. 
00:36:10.388 [2024-12-16 06:04:44.006417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.389 [2024-12-16 06:04:44.006478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.389 [2024-12-16 06:04:44.006491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.389 [2024-12-16 06:04:44.006498] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.389 [2024-12-16 06:04:44.006504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.389 [2024-12-16 06:04:44.006518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.389 qpair failed and we were unable to recover it. 00:36:10.389 [2024-12-16 06:04:44.016441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.389 [2024-12-16 06:04:44.016493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.389 [2024-12-16 06:04:44.016505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.389 [2024-12-16 06:04:44.016512] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.389 [2024-12-16 06:04:44.016518] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.389 [2024-12-16 06:04:44.016532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.389 qpair failed and we were unable to recover it. 00:36:10.389 [2024-12-16 06:04:44.026466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.389 [2024-12-16 06:04:44.026524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.389 [2024-12-16 06:04:44.026537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.389 [2024-12-16 06:04:44.026543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.389 [2024-12-16 06:04:44.026549] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.389 [2024-12-16 06:04:44.026563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.389 qpair failed and we were unable to recover it. 
00:36:10.389 [2024-12-16 06:04:44.036482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.389 [2024-12-16 06:04:44.036537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.389 [2024-12-16 06:04:44.036550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.389 [2024-12-16 06:04:44.036556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.389 [2024-12-16 06:04:44.036562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.389 [2024-12-16 06:04:44.036576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.389 qpair failed and we were unable to recover it. 00:36:10.389 [2024-12-16 06:04:44.046512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.389 [2024-12-16 06:04:44.046579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.389 [2024-12-16 06:04:44.046596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.389 [2024-12-16 06:04:44.046602] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.389 [2024-12-16 06:04:44.046608] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.389 [2024-12-16 06:04:44.046622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.389 qpair failed and we were unable to recover it. 00:36:10.389 [2024-12-16 06:04:44.056529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.389 [2024-12-16 06:04:44.056582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.389 [2024-12-16 06:04:44.056596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.389 [2024-12-16 06:04:44.056603] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.389 [2024-12-16 06:04:44.056609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.389 [2024-12-16 06:04:44.056623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.389 qpair failed and we were unable to recover it. 
00:36:10.389 [2024-12-16 06:04:44.066564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.389 [2024-12-16 06:04:44.066622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.389 [2024-12-16 06:04:44.066635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.389 [2024-12-16 06:04:44.066641] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.389 [2024-12-16 06:04:44.066648] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.389 [2024-12-16 06:04:44.066661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.389 qpair failed and we were unable to recover it. 00:36:10.389 [2024-12-16 06:04:44.076631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.389 [2024-12-16 06:04:44.076689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.389 [2024-12-16 06:04:44.076702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.389 [2024-12-16 06:04:44.076709] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.389 [2024-12-16 06:04:44.076715] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.389 [2024-12-16 06:04:44.076729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.389 qpair failed and we were unable to recover it. 00:36:10.389 [2024-12-16 06:04:44.086664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.389 [2024-12-16 06:04:44.086718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.389 [2024-12-16 06:04:44.086732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.389 [2024-12-16 06:04:44.086738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.389 [2024-12-16 06:04:44.086744] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.389 [2024-12-16 06:04:44.086762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.389 qpair failed and we were unable to recover it. 
00:36:10.389 [2024-12-16 06:04:44.096696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.389 [2024-12-16 06:04:44.096754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.389 [2024-12-16 06:04:44.096767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.389 [2024-12-16 06:04:44.096774] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.389 [2024-12-16 06:04:44.096780] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.389 [2024-12-16 06:04:44.096794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.389 qpair failed and we were unable to recover it. 00:36:10.389 [2024-12-16 06:04:44.106683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.389 [2024-12-16 06:04:44.106749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.389 [2024-12-16 06:04:44.106762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.389 [2024-12-16 06:04:44.106769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.389 [2024-12-16 06:04:44.106775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.389 [2024-12-16 06:04:44.106789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.389 qpair failed and we were unable to recover it. 00:36:10.389 [2024-12-16 06:04:44.116704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.389 [2024-12-16 06:04:44.116759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.389 [2024-12-16 06:04:44.116773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.389 [2024-12-16 06:04:44.116779] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.389 [2024-12-16 06:04:44.116785] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.389 [2024-12-16 06:04:44.116799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.389 qpair failed and we were unable to recover it. 
00:36:10.389 [2024-12-16 06:04:44.126746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.389 [2024-12-16 06:04:44.126832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.389 [2024-12-16 06:04:44.126849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.389 [2024-12-16 06:04:44.126856] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.389 [2024-12-16 06:04:44.126862] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.389 [2024-12-16 06:04:44.126877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.389 qpair failed and we were unable to recover it. 00:36:10.390 [2024-12-16 06:04:44.136771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.390 [2024-12-16 06:04:44.136829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.390 [2024-12-16 06:04:44.136843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.390 [2024-12-16 06:04:44.136853] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.390 [2024-12-16 06:04:44.136859] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.390 [2024-12-16 06:04:44.136874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-12-16 06:04:44.146790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.390 [2024-12-16 06:04:44.146873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.390 [2024-12-16 06:04:44.146886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.390 [2024-12-16 06:04:44.146892] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.390 [2024-12-16 06:04:44.146898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.390 [2024-12-16 06:04:44.146914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.390 qpair failed and we were unable to recover it. 
00:36:10.390 [2024-12-16 06:04:44.156829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.390 [2024-12-16 06:04:44.156891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.390 [2024-12-16 06:04:44.156905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.390 [2024-12-16 06:04:44.156912] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.390 [2024-12-16 06:04:44.156917] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.390 [2024-12-16 06:04:44.156932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-12-16 06:04:44.166844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.390 [2024-12-16 06:04:44.166902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.390 [2024-12-16 06:04:44.166915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.390 [2024-12-16 06:04:44.166921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.390 [2024-12-16 06:04:44.166927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.390 [2024-12-16 06:04:44.166941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-12-16 06:04:44.176873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.390 [2024-12-16 06:04:44.176929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.390 [2024-12-16 06:04:44.176942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.390 [2024-12-16 06:04:44.176948] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.390 [2024-12-16 06:04:44.176957] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.390 [2024-12-16 06:04:44.176971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.390 qpair failed and we were unable to recover it. 
00:36:10.390 [2024-12-16 06:04:44.186916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.390 [2024-12-16 06:04:44.186972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.390 [2024-12-16 06:04:44.186985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.390 [2024-12-16 06:04:44.186992] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.390 [2024-12-16 06:04:44.186998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.390 [2024-12-16 06:04:44.187012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-12-16 06:04:44.196900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.390 [2024-12-16 06:04:44.196954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.390 [2024-12-16 06:04:44.196967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.390 [2024-12-16 06:04:44.196973] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.390 [2024-12-16 06:04:44.196979] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.390 [2024-12-16 06:04:44.196994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-12-16 06:04:44.206957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.390 [2024-12-16 06:04:44.207034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.390 [2024-12-16 06:04:44.207047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.390 [2024-12-16 06:04:44.207054] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.390 [2024-12-16 06:04:44.207060] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.390 [2024-12-16 06:04:44.207074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.390 qpair failed and we were unable to recover it. 
00:36:10.390 [2024-12-16 06:04:44.216987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.390 [2024-12-16 06:04:44.217042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.390 [2024-12-16 06:04:44.217055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.390 [2024-12-16 06:04:44.217062] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.390 [2024-12-16 06:04:44.217068] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.390 [2024-12-16 06:04:44.217082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-12-16 06:04:44.227034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.390 [2024-12-16 06:04:44.227094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.390 [2024-12-16 06:04:44.227107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.390 [2024-12-16 06:04:44.227113] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.390 [2024-12-16 06:04:44.227119] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.390 [2024-12-16 06:04:44.227133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-12-16 06:04:44.237052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.390 [2024-12-16 06:04:44.237107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.390 [2024-12-16 06:04:44.237119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.390 [2024-12-16 06:04:44.237126] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.390 [2024-12-16 06:04:44.237132] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.390 [2024-12-16 06:04:44.237146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.390 qpair failed and we were unable to recover it. 
00:36:10.650 [2024-12-16 06:04:44.247074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.650 [2024-12-16 06:04:44.247131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.650 [2024-12-16 06:04:44.247145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.650 [2024-12-16 06:04:44.247150] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.650 [2024-12-16 06:04:44.247156] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.650 [2024-12-16 06:04:44.247170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.650 qpair failed and we were unable to recover it. 00:36:10.650 [2024-12-16 06:04:44.257143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.650 [2024-12-16 06:04:44.257198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.650 [2024-12-16 06:04:44.257211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.650 [2024-12-16 06:04:44.257218] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.650 [2024-12-16 06:04:44.257224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.650 [2024-12-16 06:04:44.257237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.650 qpair failed and we were unable to recover it. 00:36:10.650 [2024-12-16 06:04:44.267073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.650 [2024-12-16 06:04:44.267127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.650 [2024-12-16 06:04:44.267140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.650 [2024-12-16 06:04:44.267149] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.650 [2024-12-16 06:04:44.267155] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.651 [2024-12-16 06:04:44.267169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.651 qpair failed and we were unable to recover it. 
00:36:10.651 [2024-12-16 06:04:44.277200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.651 [2024-12-16 06:04:44.277255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.651 [2024-12-16 06:04:44.277268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.651 [2024-12-16 06:04:44.277274] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.651 [2024-12-16 06:04:44.277280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.651 [2024-12-16 06:04:44.277294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.651 qpair failed and we were unable to recover it. 00:36:10.651 [2024-12-16 06:04:44.287195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.651 [2024-12-16 06:04:44.287245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.651 [2024-12-16 06:04:44.287258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.651 [2024-12-16 06:04:44.287264] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.651 [2024-12-16 06:04:44.287269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.651 [2024-12-16 06:04:44.287283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.651 qpair failed and we were unable to recover it. 00:36:10.651 [2024-12-16 06:04:44.297258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.651 [2024-12-16 06:04:44.297310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.651 [2024-12-16 06:04:44.297323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.651 [2024-12-16 06:04:44.297329] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.651 [2024-12-16 06:04:44.297335] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.651 [2024-12-16 06:04:44.297349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.651 qpair failed and we were unable to recover it. 
00:36:10.651 [2024-12-16 06:04:44.307219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.651 [2024-12-16 06:04:44.307293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.651 [2024-12-16 06:04:44.307306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.651 [2024-12-16 06:04:44.307312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.651 [2024-12-16 06:04:44.307318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.651 [2024-12-16 06:04:44.307332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.651 qpair failed and we were unable to recover it. 00:36:10.651 [2024-12-16 06:04:44.317282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.651 [2024-12-16 06:04:44.317340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.651 [2024-12-16 06:04:44.317353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.651 [2024-12-16 06:04:44.317359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.651 [2024-12-16 06:04:44.317365] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.651 [2024-12-16 06:04:44.317379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.651 qpair failed and we were unable to recover it. 00:36:10.651 [2024-12-16 06:04:44.327301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.651 [2024-12-16 06:04:44.327363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.651 [2024-12-16 06:04:44.327377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.651 [2024-12-16 06:04:44.327384] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.651 [2024-12-16 06:04:44.327390] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.651 [2024-12-16 06:04:44.327404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.651 qpair failed and we were unable to recover it. 
00:36:10.651 [2024-12-16 06:04:44.337269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.651 [2024-12-16 06:04:44.337323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.651 [2024-12-16 06:04:44.337336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.651 [2024-12-16 06:04:44.337342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.651 [2024-12-16 06:04:44.337348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.651 [2024-12-16 06:04:44.337363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.651 qpair failed and we were unable to recover it. 00:36:10.651 [2024-12-16 06:04:44.347345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.651 [2024-12-16 06:04:44.347436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.651 [2024-12-16 06:04:44.347449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.651 [2024-12-16 06:04:44.347456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.651 [2024-12-16 06:04:44.347462] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.651 [2024-12-16 06:04:44.347476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.651 qpair failed and we were unable to recover it. 00:36:10.651 [2024-12-16 06:04:44.357399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.651 [2024-12-16 06:04:44.357454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.651 [2024-12-16 06:04:44.357467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.651 [2024-12-16 06:04:44.357477] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.651 [2024-12-16 06:04:44.357483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.651 [2024-12-16 06:04:44.357497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.651 qpair failed and we were unable to recover it. 
00:36:10.651 [2024-12-16 06:04:44.367427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.651 [2024-12-16 06:04:44.367482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.651 [2024-12-16 06:04:44.367496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.651 [2024-12-16 06:04:44.367502] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.651 [2024-12-16 06:04:44.367508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.651 [2024-12-16 06:04:44.367522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.651 qpair failed and we were unable to recover it. 00:36:10.651 [2024-12-16 06:04:44.377474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.651 [2024-12-16 06:04:44.377540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.651 [2024-12-16 06:04:44.377553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.651 [2024-12-16 06:04:44.377559] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.651 [2024-12-16 06:04:44.377565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.651 [2024-12-16 06:04:44.377579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.651 qpair failed and we were unable to recover it. 00:36:10.651 [2024-12-16 06:04:44.387499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.651 [2024-12-16 06:04:44.387558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.651 [2024-12-16 06:04:44.387571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.651 [2024-12-16 06:04:44.387577] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.651 [2024-12-16 06:04:44.387583] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.651 [2024-12-16 06:04:44.387597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.651 qpair failed and we were unable to recover it. 
00:36:10.651 [2024-12-16 06:04:44.397515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.651 [2024-12-16 06:04:44.397568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.651 [2024-12-16 06:04:44.397581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.651 [2024-12-16 06:04:44.397587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.651 [2024-12-16 06:04:44.397593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.651 [2024-12-16 06:04:44.397607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.651 qpair failed and we were unable to recover it. 00:36:10.651 [2024-12-16 06:04:44.407545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.652 [2024-12-16 06:04:44.407598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.652 [2024-12-16 06:04:44.407611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.652 [2024-12-16 06:04:44.407617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.652 [2024-12-16 06:04:44.407623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.652 [2024-12-16 06:04:44.407637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.652 qpair failed and we were unable to recover it. 00:36:10.652 [2024-12-16 06:04:44.417565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.652 [2024-12-16 06:04:44.417613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.652 [2024-12-16 06:04:44.417627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.652 [2024-12-16 06:04:44.417634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.652 [2024-12-16 06:04:44.417640] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.652 [2024-12-16 06:04:44.417655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.652 qpair failed and we were unable to recover it. 
00:36:10.652 [2024-12-16 06:04:44.427613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.652 [2024-12-16 06:04:44.427671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.652 [2024-12-16 06:04:44.427685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.652 [2024-12-16 06:04:44.427691] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.652 [2024-12-16 06:04:44.427697] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.652 [2024-12-16 06:04:44.427712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.652 qpair failed and we were unable to recover it. 00:36:10.652 [2024-12-16 06:04:44.437627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.652 [2024-12-16 06:04:44.437697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.652 [2024-12-16 06:04:44.437710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.652 [2024-12-16 06:04:44.437717] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.652 [2024-12-16 06:04:44.437722] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.652 [2024-12-16 06:04:44.437737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.652 qpair failed and we were unable to recover it. 00:36:10.652 [2024-12-16 06:04:44.447662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.652 [2024-12-16 06:04:44.447716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.652 [2024-12-16 06:04:44.447733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.652 [2024-12-16 06:04:44.447740] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.652 [2024-12-16 06:04:44.447746] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.652 [2024-12-16 06:04:44.447761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.652 qpair failed and we were unable to recover it. 
00:36:10.652 [2024-12-16 06:04:44.457696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.652 [2024-12-16 06:04:44.457750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.652 [2024-12-16 06:04:44.457764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.652 [2024-12-16 06:04:44.457771] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.652 [2024-12-16 06:04:44.457777] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.652 [2024-12-16 06:04:44.457791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.652 qpair failed and we were unable to recover it. 00:36:10.652 [2024-12-16 06:04:44.467817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.652 [2024-12-16 06:04:44.467877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.652 [2024-12-16 06:04:44.467891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.652 [2024-12-16 06:04:44.467898] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.652 [2024-12-16 06:04:44.467903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.652 [2024-12-16 06:04:44.467918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.652 qpair failed and we were unable to recover it. 00:36:10.652 [2024-12-16 06:04:44.477687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.652 [2024-12-16 06:04:44.477745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.652 [2024-12-16 06:04:44.477757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.652 [2024-12-16 06:04:44.477763] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.652 [2024-12-16 06:04:44.477769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.652 [2024-12-16 06:04:44.477783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.652 qpair failed and we were unable to recover it. 
00:36:10.652 [2024-12-16 06:04:44.487789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.652 [2024-12-16 06:04:44.487843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.652 [2024-12-16 06:04:44.487860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.652 [2024-12-16 06:04:44.487866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.652 [2024-12-16 06:04:44.487872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.652 [2024-12-16 06:04:44.487889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.652 qpair failed and we were unable to recover it. 00:36:10.652 [2024-12-16 06:04:44.497822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.652 [2024-12-16 06:04:44.497876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.652 [2024-12-16 06:04:44.497890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.652 [2024-12-16 06:04:44.497897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.652 [2024-12-16 06:04:44.497902] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.652 [2024-12-16 06:04:44.497917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.652 qpair failed and we were unable to recover it. 00:36:10.912 [2024-12-16 06:04:44.507854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.912 [2024-12-16 06:04:44.507952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.912 [2024-12-16 06:04:44.507965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.912 [2024-12-16 06:04:44.507971] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.912 [2024-12-16 06:04:44.507977] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.912 [2024-12-16 06:04:44.507992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.912 qpair failed and we were unable to recover it. 
00:36:10.912 [2024-12-16 06:04:44.517877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.912 [2024-12-16 06:04:44.517937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.912 [2024-12-16 06:04:44.517950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.912 [2024-12-16 06:04:44.517956] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.912 [2024-12-16 06:04:44.517962] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.912 [2024-12-16 06:04:44.517976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.912 qpair failed and we were unable to recover it. 00:36:10.912 [2024-12-16 06:04:44.527901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.912 [2024-12-16 06:04:44.527959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.912 [2024-12-16 06:04:44.527973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.912 [2024-12-16 06:04:44.527979] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.912 [2024-12-16 06:04:44.527985] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.912 [2024-12-16 06:04:44.528000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.912 qpair failed and we were unable to recover it. 00:36:10.912 [2024-12-16 06:04:44.537928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.912 [2024-12-16 06:04:44.538002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.912 [2024-12-16 06:04:44.538018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.912 [2024-12-16 06:04:44.538025] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.912 [2024-12-16 06:04:44.538031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.912 [2024-12-16 06:04:44.538045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.912 qpair failed and we were unable to recover it. 
00:36:10.912 [2024-12-16 06:04:44.547998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.912 [2024-12-16 06:04:44.548057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.912 [2024-12-16 06:04:44.548071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.912 [2024-12-16 06:04:44.548078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.912 [2024-12-16 06:04:44.548083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.912 [2024-12-16 06:04:44.548098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.912 qpair failed and we were unable to recover it. 00:36:10.912 [2024-12-16 06:04:44.557988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.912 [2024-12-16 06:04:44.558043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.912 [2024-12-16 06:04:44.558056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.912 [2024-12-16 06:04:44.558062] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.912 [2024-12-16 06:04:44.558068] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.912 [2024-12-16 06:04:44.558083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.912 qpair failed and we were unable to recover it. 00:36:10.912 [2024-12-16 06:04:44.568045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.913 [2024-12-16 06:04:44.568112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.913 [2024-12-16 06:04:44.568125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.913 [2024-12-16 06:04:44.568131] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.913 [2024-12-16 06:04:44.568137] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.913 [2024-12-16 06:04:44.568151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.913 qpair failed and we were unable to recover it. 
00:36:10.913 [2024-12-16 06:04:44.578044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.913 [2024-12-16 06:04:44.578100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.913 [2024-12-16 06:04:44.578113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.913 [2024-12-16 06:04:44.578119] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.913 [2024-12-16 06:04:44.578125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.913 [2024-12-16 06:04:44.578145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.913 qpair failed and we were unable to recover it. 00:36:10.913 [2024-12-16 06:04:44.588106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.913 [2024-12-16 06:04:44.588168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.913 [2024-12-16 06:04:44.588181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.913 [2024-12-16 06:04:44.588187] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.913 [2024-12-16 06:04:44.588194] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.913 [2024-12-16 06:04:44.588208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.913 qpair failed and we were unable to recover it. 00:36:10.913 [2024-12-16 06:04:44.598130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.913 [2024-12-16 06:04:44.598212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.913 [2024-12-16 06:04:44.598225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.913 [2024-12-16 06:04:44.598231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.913 [2024-12-16 06:04:44.598237] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.913 [2024-12-16 06:04:44.598251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.913 qpair failed and we were unable to recover it. 
00:36:10.913 [2024-12-16 06:04:44.608133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.913 [2024-12-16 06:04:44.608183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.913 [2024-12-16 06:04:44.608196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.913 [2024-12-16 06:04:44.608202] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.913 [2024-12-16 06:04:44.608209] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.913 [2024-12-16 06:04:44.608222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.913 qpair failed and we were unable to recover it. 00:36:10.913 [2024-12-16 06:04:44.618176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.913 [2024-12-16 06:04:44.618231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.913 [2024-12-16 06:04:44.618244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.913 [2024-12-16 06:04:44.618250] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.913 [2024-12-16 06:04:44.618256] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.913 [2024-12-16 06:04:44.618270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.913 qpair failed and we were unable to recover it. 00:36:10.913 [2024-12-16 06:04:44.628239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.913 [2024-12-16 06:04:44.628302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.913 [2024-12-16 06:04:44.628318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.913 [2024-12-16 06:04:44.628324] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.913 [2024-12-16 06:04:44.628330] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.913 [2024-12-16 06:04:44.628344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.913 qpair failed and we were unable to recover it. 
00:36:10.913 [2024-12-16 06:04:44.638240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.913 [2024-12-16 06:04:44.638298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.913 [2024-12-16 06:04:44.638311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.913 [2024-12-16 06:04:44.638317] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.913 [2024-12-16 06:04:44.638323] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.913 [2024-12-16 06:04:44.638337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.913 qpair failed and we were unable to recover it. 00:36:10.913 [2024-12-16 06:04:44.648255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.913 [2024-12-16 06:04:44.648311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.913 [2024-12-16 06:04:44.648324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.913 [2024-12-16 06:04:44.648331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.913 [2024-12-16 06:04:44.648339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.913 [2024-12-16 06:04:44.648353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.913 qpair failed and we were unable to recover it. 00:36:10.913 [2024-12-16 06:04:44.658236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.913 [2024-12-16 06:04:44.658294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.913 [2024-12-16 06:04:44.658307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.913 [2024-12-16 06:04:44.658313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.913 [2024-12-16 06:04:44.658319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.913 [2024-12-16 06:04:44.658333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.913 qpair failed and we were unable to recover it. 
00:36:10.913 [2024-12-16 06:04:44.668354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.913 [2024-12-16 06:04:44.668407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.913 [2024-12-16 06:04:44.668420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.913 [2024-12-16 06:04:44.668426] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.913 [2024-12-16 06:04:44.668435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.913 [2024-12-16 06:04:44.668449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.913 qpair failed and we were unable to recover it. 00:36:10.913 [2024-12-16 06:04:44.678336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.913 [2024-12-16 06:04:44.678393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.913 [2024-12-16 06:04:44.678408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.913 [2024-12-16 06:04:44.678414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.913 [2024-12-16 06:04:44.678420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.913 [2024-12-16 06:04:44.678434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.913 qpair failed and we were unable to recover it. 00:36:10.913 [2024-12-16 06:04:44.688377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.913 [2024-12-16 06:04:44.688429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.913 [2024-12-16 06:04:44.688442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.913 [2024-12-16 06:04:44.688449] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.913 [2024-12-16 06:04:44.688454] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.914 [2024-12-16 06:04:44.688468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.914 qpair failed and we were unable to recover it. 
00:36:10.914 [2024-12-16 06:04:44.698342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.914 [2024-12-16 06:04:44.698391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.914 [2024-12-16 06:04:44.698404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.914 [2024-12-16 06:04:44.698410] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.914 [2024-12-16 06:04:44.698416] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.914 [2024-12-16 06:04:44.698430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.914 qpair failed and we were unable to recover it. 00:36:10.914 [2024-12-16 06:04:44.708445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.914 [2024-12-16 06:04:44.708523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.914 [2024-12-16 06:04:44.708536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.914 [2024-12-16 06:04:44.708543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.914 [2024-12-16 06:04:44.708549] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.914 [2024-12-16 06:04:44.708563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.914 qpair failed and we were unable to recover it. 00:36:10.914 [2024-12-16 06:04:44.718506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.914 [2024-12-16 06:04:44.718565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.914 [2024-12-16 06:04:44.718579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.914 [2024-12-16 06:04:44.718585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.914 [2024-12-16 06:04:44.718591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.914 [2024-12-16 06:04:44.718606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.914 qpair failed and we were unable to recover it. 
00:36:10.914 [2024-12-16 06:04:44.728495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.914 [2024-12-16 06:04:44.728550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.914 [2024-12-16 06:04:44.728565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.914 [2024-12-16 06:04:44.728572] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.914 [2024-12-16 06:04:44.728578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.914 [2024-12-16 06:04:44.728593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.914 qpair failed and we were unable to recover it. 00:36:10.914 [2024-12-16 06:04:44.738550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.914 [2024-12-16 06:04:44.738602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.914 [2024-12-16 06:04:44.738615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.914 [2024-12-16 06:04:44.738621] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.914 [2024-12-16 06:04:44.738628] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.914 [2024-12-16 06:04:44.738642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.914 qpair failed and we were unable to recover it. 00:36:10.914 [2024-12-16 06:04:44.748536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.914 [2024-12-16 06:04:44.748593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.914 [2024-12-16 06:04:44.748606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.914 [2024-12-16 06:04:44.748612] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.914 [2024-12-16 06:04:44.748618] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.914 [2024-12-16 06:04:44.748632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.914 qpair failed and we were unable to recover it. 
00:36:10.914 [2024-12-16 06:04:44.758591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.914 [2024-12-16 06:04:44.758647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.914 [2024-12-16 06:04:44.758660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.914 [2024-12-16 06:04:44.758666] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.914 [2024-12-16 06:04:44.758675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:10.914 [2024-12-16 06:04:44.758689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.914 qpair failed and we were unable to recover it. 00:36:11.174 [2024-12-16 06:04:44.768602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.174 [2024-12-16 06:04:44.768655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.174 [2024-12-16 06:04:44.768668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.174 [2024-12-16 06:04:44.768675] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.174 [2024-12-16 06:04:44.768681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.174 [2024-12-16 06:04:44.768695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.174 qpair failed and we were unable to recover it. 00:36:11.174 [2024-12-16 06:04:44.778667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.174 [2024-12-16 06:04:44.778718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.174 [2024-12-16 06:04:44.778731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.174 [2024-12-16 06:04:44.778737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.174 [2024-12-16 06:04:44.778743] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.174 [2024-12-16 06:04:44.778758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.174 qpair failed and we were unable to recover it. 
00:36:11.174 [2024-12-16 06:04:44.788613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.174 [2024-12-16 06:04:44.788667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.174 [2024-12-16 06:04:44.788681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.174 [2024-12-16 06:04:44.788687] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.174 [2024-12-16 06:04:44.788693] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.174 [2024-12-16 06:04:44.788707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.174 qpair failed and we were unable to recover it. 00:36:11.174 [2024-12-16 06:04:44.798743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.174 [2024-12-16 06:04:44.798800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.174 [2024-12-16 06:04:44.798813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.174 [2024-12-16 06:04:44.798819] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.174 [2024-12-16 06:04:44.798825] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.174 [2024-12-16 06:04:44.798839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.174 qpair failed and we were unable to recover it. 00:36:11.174 [2024-12-16 06:04:44.808696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.174 [2024-12-16 06:04:44.808755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.174 [2024-12-16 06:04:44.808770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.174 [2024-12-16 06:04:44.808776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.174 [2024-12-16 06:04:44.808783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.174 [2024-12-16 06:04:44.808797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.174 qpair failed and we were unable to recover it. 
00:36:11.174 [2024-12-16 06:04:44.818742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.174 [2024-12-16 06:04:44.818797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.174 [2024-12-16 06:04:44.818810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.174 [2024-12-16 06:04:44.818816] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.174 [2024-12-16 06:04:44.818822] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.175 [2024-12-16 06:04:44.818837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.175 qpair failed and we were unable to recover it. 00:36:11.175 [2024-12-16 06:04:44.828781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.175 [2024-12-16 06:04:44.828838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.175 [2024-12-16 06:04:44.828856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.175 [2024-12-16 06:04:44.828862] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.175 [2024-12-16 06:04:44.828868] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.175 [2024-12-16 06:04:44.828882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.175 qpair failed and we were unable to recover it. 00:36:11.175 [2024-12-16 06:04:44.838802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.175 [2024-12-16 06:04:44.838860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.175 [2024-12-16 06:04:44.838873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.175 [2024-12-16 06:04:44.838879] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.175 [2024-12-16 06:04:44.838885] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.175 [2024-12-16 06:04:44.838899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.175 qpair failed and we were unable to recover it. 
00:36:11.175 [2024-12-16 06:04:44.848806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.175 [2024-12-16 06:04:44.848864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.175 [2024-12-16 06:04:44.848878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.175 [2024-12-16 06:04:44.848887] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.175 [2024-12-16 06:04:44.848893] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.175 [2024-12-16 06:04:44.848908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.175 qpair failed and we were unable to recover it. 00:36:11.175 [2024-12-16 06:04:44.858786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.175 [2024-12-16 06:04:44.858841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.175 [2024-12-16 06:04:44.858859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.175 [2024-12-16 06:04:44.858866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.175 [2024-12-16 06:04:44.858872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.175 [2024-12-16 06:04:44.858886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.175 qpair failed and we were unable to recover it. 00:36:11.175 [2024-12-16 06:04:44.868915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.175 [2024-12-16 06:04:44.868979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.175 [2024-12-16 06:04:44.868992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.175 [2024-12-16 06:04:44.868998] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.175 [2024-12-16 06:04:44.869004] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.175 [2024-12-16 06:04:44.869019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.175 qpair failed and we were unable to recover it. 
00:36:11.175 [2024-12-16 06:04:44.878988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.175 [2024-12-16 06:04:44.879082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.175 [2024-12-16 06:04:44.879095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.175 [2024-12-16 06:04:44.879101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.175 [2024-12-16 06:04:44.879107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.175 [2024-12-16 06:04:44.879122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.175 qpair failed and we were unable to recover it. 00:36:11.175 [2024-12-16 06:04:44.888944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.175 [2024-12-16 06:04:44.888996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.175 [2024-12-16 06:04:44.889009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.175 [2024-12-16 06:04:44.889015] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.175 [2024-12-16 06:04:44.889022] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.175 [2024-12-16 06:04:44.889036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.175 qpair failed and we were unable to recover it. 00:36:11.175 [2024-12-16 06:04:44.898997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.175 [2024-12-16 06:04:44.899045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.175 [2024-12-16 06:04:44.899058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.175 [2024-12-16 06:04:44.899064] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.175 [2024-12-16 06:04:44.899070] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.175 [2024-12-16 06:04:44.899085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.175 qpair failed and we were unable to recover it. 
00:36:11.175 [2024-12-16 06:04:44.908994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.175 [2024-12-16 06:04:44.909049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.175 [2024-12-16 06:04:44.909062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.175 [2024-12-16 06:04:44.909068] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.175 [2024-12-16 06:04:44.909074] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.175 [2024-12-16 06:04:44.909087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.175 qpair failed and we were unable to recover it. 00:36:11.175 [2024-12-16 06:04:44.918982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.175 [2024-12-16 06:04:44.919040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.175 [2024-12-16 06:04:44.919054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.175 [2024-12-16 06:04:44.919060] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.175 [2024-12-16 06:04:44.919066] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.175 [2024-12-16 06:04:44.919080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.175 qpair failed and we were unable to recover it. 00:36:11.175 [2024-12-16 06:04:44.929064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.175 [2024-12-16 06:04:44.929116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.175 [2024-12-16 06:04:44.929130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.175 [2024-12-16 06:04:44.929136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.175 [2024-12-16 06:04:44.929143] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.175 [2024-12-16 06:04:44.929157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.175 qpair failed and we were unable to recover it. 
00:36:11.175 [2024-12-16 06:04:44.939062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.175 [2024-12-16 06:04:44.939116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.175 [2024-12-16 06:04:44.939133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.175 [2024-12-16 06:04:44.939140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.175 [2024-12-16 06:04:44.939146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.175 [2024-12-16 06:04:44.939161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.175 qpair failed and we were unable to recover it. 00:36:11.175 [2024-12-16 06:04:44.949059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.175 [2024-12-16 06:04:44.949116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.175 [2024-12-16 06:04:44.949130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.175 [2024-12-16 06:04:44.949137] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.175 [2024-12-16 06:04:44.949143] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.175 [2024-12-16 06:04:44.949157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.176 qpair failed and we were unable to recover it. 00:36:11.176 [2024-12-16 06:04:44.959107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.176 [2024-12-16 06:04:44.959167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.176 [2024-12-16 06:04:44.959180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.176 [2024-12-16 06:04:44.959186] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.176 [2024-12-16 06:04:44.959192] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.176 [2024-12-16 06:04:44.959207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.176 qpair failed and we were unable to recover it. 
00:36:11.176 [2024-12-16 06:04:44.969134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.176 [2024-12-16 06:04:44.969185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.176 [2024-12-16 06:04:44.969197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.176 [2024-12-16 06:04:44.969204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.176 [2024-12-16 06:04:44.969210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.176 [2024-12-16 06:04:44.969224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.176 qpair failed and we were unable to recover it. 00:36:11.176 [2024-12-16 06:04:44.979190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.176 [2024-12-16 06:04:44.979246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.176 [2024-12-16 06:04:44.979259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.176 [2024-12-16 06:04:44.979267] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.176 [2024-12-16 06:04:44.979273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.176 [2024-12-16 06:04:44.979288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.176 qpair failed and we were unable to recover it. 00:36:11.176 [2024-12-16 06:04:44.989302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.176 [2024-12-16 06:04:44.989399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.176 [2024-12-16 06:04:44.989412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.176 [2024-12-16 06:04:44.989419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.176 [2024-12-16 06:04:44.989425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.176 [2024-12-16 06:04:44.989439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.176 qpair failed and we were unable to recover it. 
00:36:11.176 [2024-12-16 06:04:44.999232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.176 [2024-12-16 06:04:44.999290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.176 [2024-12-16 06:04:44.999303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.176 [2024-12-16 06:04:44.999309] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.176 [2024-12-16 06:04:44.999315] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.176 [2024-12-16 06:04:44.999329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.176 qpair failed and we were unable to recover it. 00:36:11.176 [2024-12-16 06:04:45.009287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.176 [2024-12-16 06:04:45.009347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.176 [2024-12-16 06:04:45.009360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.176 [2024-12-16 06:04:45.009367] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.176 [2024-12-16 06:04:45.009373] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.176 [2024-12-16 06:04:45.009387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.176 qpair failed and we were unable to recover it. 00:36:11.176 [2024-12-16 06:04:45.019340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.176 [2024-12-16 06:04:45.019394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.176 [2024-12-16 06:04:45.019407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.176 [2024-12-16 06:04:45.019414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.176 [2024-12-16 06:04:45.019420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.176 [2024-12-16 06:04:45.019434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.176 qpair failed and we were unable to recover it. 
00:36:11.439 [2024-12-16 06:04:45.029321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.439 [2024-12-16 06:04:45.029378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.439 [2024-12-16 06:04:45.029394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.439 [2024-12-16 06:04:45.029401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.439 [2024-12-16 06:04:45.029406] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.439 [2024-12-16 06:04:45.029421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-16 06:04:45.039372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.439 [2024-12-16 06:04:45.039427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.439 [2024-12-16 06:04:45.039440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.439 [2024-12-16 06:04:45.039446] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.439 [2024-12-16 06:04:45.039452] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.439 [2024-12-16 06:04:45.039466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-16 06:04:45.049366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.439 [2024-12-16 06:04:45.049426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.439 [2024-12-16 06:04:45.049439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.439 [2024-12-16 06:04:45.049446] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.439 [2024-12-16 06:04:45.049452] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.439 [2024-12-16 06:04:45.049465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.439 qpair failed and we were unable to recover it. 
00:36:11.439 [2024-12-16 06:04:45.059461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.439 [2024-12-16 06:04:45.059516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.439 [2024-12-16 06:04:45.059530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.439 [2024-12-16 06:04:45.059537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.439 [2024-12-16 06:04:45.059543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.439 [2024-12-16 06:04:45.059559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-16 06:04:45.069482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.439 [2024-12-16 06:04:45.069544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.439 [2024-12-16 06:04:45.069557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.439 [2024-12-16 06:04:45.069563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.439 [2024-12-16 06:04:45.069569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.439 [2024-12-16 06:04:45.069587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-16 06:04:45.079479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.439 [2024-12-16 06:04:45.079532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.439 [2024-12-16 06:04:45.079545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.439 [2024-12-16 06:04:45.079552] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.439 [2024-12-16 06:04:45.079558] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.439 [2024-12-16 06:04:45.079572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.439 qpair failed and we were unable to recover it. 
00:36:11.439 [2024-12-16 06:04:45.089561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.439 [2024-12-16 06:04:45.089615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.439 [2024-12-16 06:04:45.089628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.439 [2024-12-16 06:04:45.089634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.439 [2024-12-16 06:04:45.089640] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.439 [2024-12-16 06:04:45.089655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-16 06:04:45.099565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.439 [2024-12-16 06:04:45.099642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.439 [2024-12-16 06:04:45.099655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.439 [2024-12-16 06:04:45.099662] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.439 [2024-12-16 06:04:45.099668] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.439 [2024-12-16 06:04:45.099682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-16 06:04:45.109580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.439 [2024-12-16 06:04:45.109661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.439 [2024-12-16 06:04:45.109674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.439 [2024-12-16 06:04:45.109680] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.439 [2024-12-16 06:04:45.109686] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.439 [2024-12-16 06:04:45.109700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.439 qpair failed and we were unable to recover it. 
00:36:11.439 [2024-12-16 06:04:45.119542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.439 [2024-12-16 06:04:45.119604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.439 [2024-12-16 06:04:45.119622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.439 [2024-12-16 06:04:45.119628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.439 [2024-12-16 06:04:45.119634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.439 [2024-12-16 06:04:45.119649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.439 qpair failed and we were unable to recover it. 00:36:11.439 [2024-12-16 06:04:45.129636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.439 [2024-12-16 06:04:45.129693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.439 [2024-12-16 06:04:45.129707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.439 [2024-12-16 06:04:45.129713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.439 [2024-12-16 06:04:45.129719] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.440 [2024-12-16 06:04:45.129734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-16 06:04:45.139591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.440 [2024-12-16 06:04:45.139642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.440 [2024-12-16 06:04:45.139657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.440 [2024-12-16 06:04:45.139664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.440 [2024-12-16 06:04:45.139670] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.440 [2024-12-16 06:04:45.139685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.440 qpair failed and we were unable to recover it. 
00:36:11.440 [2024-12-16 06:04:45.149633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.440 [2024-12-16 06:04:45.149689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.440 [2024-12-16 06:04:45.149703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.440 [2024-12-16 06:04:45.149710] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.440 [2024-12-16 06:04:45.149716] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.440 [2024-12-16 06:04:45.149730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-16 06:04:45.159727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.440 [2024-12-16 06:04:45.159782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.440 [2024-12-16 06:04:45.159795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.440 [2024-12-16 06:04:45.159802] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.440 [2024-12-16 06:04:45.159811] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.440 [2024-12-16 06:04:45.159826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-16 06:04:45.169748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.440 [2024-12-16 06:04:45.169800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.440 [2024-12-16 06:04:45.169814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.440 [2024-12-16 06:04:45.169820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.440 [2024-12-16 06:04:45.169826] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.440 [2024-12-16 06:04:45.169840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.440 qpair failed and we were unable to recover it. 
00:36:11.440 [2024-12-16 06:04:45.179772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.440 [2024-12-16 06:04:45.179858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.440 [2024-12-16 06:04:45.179872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.440 [2024-12-16 06:04:45.179878] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.440 [2024-12-16 06:04:45.179884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.440 [2024-12-16 06:04:45.179898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-16 06:04:45.189817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.440 [2024-12-16 06:04:45.189877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.440 [2024-12-16 06:04:45.189890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.440 [2024-12-16 06:04:45.189897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.440 [2024-12-16 06:04:45.189903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.440 [2024-12-16 06:04:45.189917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-16 06:04:45.199835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.440 [2024-12-16 06:04:45.199895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.440 [2024-12-16 06:04:45.199908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.440 [2024-12-16 06:04:45.199914] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.440 [2024-12-16 06:04:45.199920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.440 [2024-12-16 06:04:45.199934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.440 qpair failed and we were unable to recover it. 
00:36:11.440 [2024-12-16 06:04:45.209870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.440 [2024-12-16 06:04:45.209929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.440 [2024-12-16 06:04:45.209942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.440 [2024-12-16 06:04:45.209948] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.440 [2024-12-16 06:04:45.209954] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.440 [2024-12-16 06:04:45.209968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-16 06:04:45.219852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.440 [2024-12-16 06:04:45.219908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.440 [2024-12-16 06:04:45.219920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.440 [2024-12-16 06:04:45.219926] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.440 [2024-12-16 06:04:45.219932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.440 [2024-12-16 06:04:45.219947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-16 06:04:45.229932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.440 [2024-12-16 06:04:45.229987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.440 [2024-12-16 06:04:45.230000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.440 [2024-12-16 06:04:45.230006] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.440 [2024-12-16 06:04:45.230012] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.440 [2024-12-16 06:04:45.230026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.440 qpair failed and we were unable to recover it. 
00:36:11.440 [2024-12-16 06:04:45.239944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.440 [2024-12-16 06:04:45.239998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.440 [2024-12-16 06:04:45.240011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.440 [2024-12-16 06:04:45.240018] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.440 [2024-12-16 06:04:45.240024] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.440 [2024-12-16 06:04:45.240038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-16 06:04:45.249972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.440 [2024-12-16 06:04:45.250023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.440 [2024-12-16 06:04:45.250037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.440 [2024-12-16 06:04:45.250043] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.440 [2024-12-16 06:04:45.250052] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.440 [2024-12-16 06:04:45.250067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.440 qpair failed and we were unable to recover it. 00:36:11.440 [2024-12-16 06:04:45.259990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.440 [2024-12-16 06:04:45.260041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.440 [2024-12-16 06:04:45.260053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.440 [2024-12-16 06:04:45.260059] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.440 [2024-12-16 06:04:45.260065] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.441 [2024-12-16 06:04:45.260080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.441 qpair failed and we were unable to recover it. 
00:36:11.441 [2024-12-16 06:04:45.270084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.441 [2024-12-16 06:04:45.270192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.441 [2024-12-16 06:04:45.270206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.441 [2024-12-16 06:04:45.270212] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.441 [2024-12-16 06:04:45.270219] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.441 [2024-12-16 06:04:45.270233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-16 06:04:45.280063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.441 [2024-12-16 06:04:45.280118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.441 [2024-12-16 06:04:45.280131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.441 [2024-12-16 06:04:45.280137] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.441 [2024-12-16 06:04:45.280143] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.441 [2024-12-16 06:04:45.280157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.441 qpair failed and we were unable to recover it. 00:36:11.441 [2024-12-16 06:04:45.290016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.441 [2024-12-16 06:04:45.290079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.441 [2024-12-16 06:04:45.290092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.441 [2024-12-16 06:04:45.290098] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.441 [2024-12-16 06:04:45.290104] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.441 [2024-12-16 06:04:45.290118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.441 qpair failed and we were unable to recover it. 
00:36:11.788 [2024-12-16 06:04:45.300116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.788 [2024-12-16 06:04:45.300196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.788 [2024-12-16 06:04:45.300209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.788 [2024-12-16 06:04:45.300215] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.788 [2024-12-16 06:04:45.300221] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.788 [2024-12-16 06:04:45.300236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.788 qpair failed and we were unable to recover it. 00:36:11.788 [2024-12-16 06:04:45.310189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.788 [2024-12-16 06:04:45.310248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.788 [2024-12-16 06:04:45.310263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.788 [2024-12-16 06:04:45.310269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.788 [2024-12-16 06:04:45.310275] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.788 [2024-12-16 06:04:45.310289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.788 qpair failed and we were unable to recover it. 00:36:11.788 [2024-12-16 06:04:45.320176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.788 [2024-12-16 06:04:45.320253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.788 [2024-12-16 06:04:45.320266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.788 [2024-12-16 06:04:45.320272] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.788 [2024-12-16 06:04:45.320278] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.788 [2024-12-16 06:04:45.320292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.788 qpair failed and we were unable to recover it. 
00:36:11.788 [2024-12-16 06:04:45.330127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.788 [2024-12-16 06:04:45.330179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.788 [2024-12-16 06:04:45.330193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.788 [2024-12-16 06:04:45.330200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.788 [2024-12-16 06:04:45.330206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.788 [2024-12-16 06:04:45.330221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.788 qpair failed and we were unable to recover it. 00:36:11.788 [2024-12-16 06:04:45.340232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.788 [2024-12-16 06:04:45.340291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.788 [2024-12-16 06:04:45.340304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.788 [2024-12-16 06:04:45.340317] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.788 [2024-12-16 06:04:45.340323] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.788 [2024-12-16 06:04:45.340337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.788 qpair failed and we were unable to recover it. 00:36:11.788 [2024-12-16 06:04:45.350302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.788 [2024-12-16 06:04:45.350403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.788 [2024-12-16 06:04:45.350417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.788 [2024-12-16 06:04:45.350423] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.788 [2024-12-16 06:04:45.350430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.788 [2024-12-16 06:04:45.350444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.788 qpair failed and we were unable to recover it. 
00:36:11.788 [2024-12-16 06:04:45.360301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.788 [2024-12-16 06:04:45.360355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.788 [2024-12-16 06:04:45.360368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.788 [2024-12-16 06:04:45.360374] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.788 [2024-12-16 06:04:45.360380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.788 [2024-12-16 06:04:45.360394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.788 qpair failed and we were unable to recover it. 00:36:11.788 [2024-12-16 06:04:45.370332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.788 [2024-12-16 06:04:45.370411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.788 [2024-12-16 06:04:45.370424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.788 [2024-12-16 06:04:45.370430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.788 [2024-12-16 06:04:45.370436] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.788 [2024-12-16 06:04:45.370450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.788 qpair failed and we were unable to recover it. 00:36:11.788 [2024-12-16 06:04:45.380349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.788 [2024-12-16 06:04:45.380415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.788 [2024-12-16 06:04:45.380428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.788 [2024-12-16 06:04:45.380434] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.788 [2024-12-16 06:04:45.380441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.788 [2024-12-16 06:04:45.380455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.788 qpair failed and we were unable to recover it. 
00:36:11.788 [2024-12-16 06:04:45.390375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.788 [2024-12-16 06:04:45.390433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.788 [2024-12-16 06:04:45.390446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.788 [2024-12-16 06:04:45.390453] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.788 [2024-12-16 06:04:45.390458] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.789 [2024-12-16 06:04:45.390472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.789 qpair failed and we were unable to recover it. 00:36:11.789 [2024-12-16 06:04:45.400401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.789 [2024-12-16 06:04:45.400457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.789 [2024-12-16 06:04:45.400470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.789 [2024-12-16 06:04:45.400476] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.789 [2024-12-16 06:04:45.400482] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.789 [2024-12-16 06:04:45.400496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.789 qpair failed and we were unable to recover it. 00:36:11.789 [2024-12-16 06:04:45.410412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.789 [2024-12-16 06:04:45.410464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.789 [2024-12-16 06:04:45.410477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.789 [2024-12-16 06:04:45.410483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.789 [2024-12-16 06:04:45.410490] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.789 [2024-12-16 06:04:45.410504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.789 qpair failed and we were unable to recover it. 
00:36:11.789 [2024-12-16 06:04:45.420500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.789 [2024-12-16 06:04:45.420561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.789 [2024-12-16 06:04:45.420574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.789 [2024-12-16 06:04:45.420580] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.789 [2024-12-16 06:04:45.420586] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.789 [2024-12-16 06:04:45.420600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.789 qpair failed and we were unable to recover it. 00:36:11.789 [2024-12-16 06:04:45.430425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.789 [2024-12-16 06:04:45.430483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.789 [2024-12-16 06:04:45.430496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.789 [2024-12-16 06:04:45.430506] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.789 [2024-12-16 06:04:45.430512] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.789 [2024-12-16 06:04:45.430526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.789 qpair failed and we were unable to recover it. 00:36:11.789 [2024-12-16 06:04:45.440512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.789 [2024-12-16 06:04:45.440568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.789 [2024-12-16 06:04:45.440581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.789 [2024-12-16 06:04:45.440588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.789 [2024-12-16 06:04:45.440593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.789 [2024-12-16 06:04:45.440607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.789 qpair failed and we were unable to recover it. 
00:36:11.789 [2024-12-16 06:04:45.450538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.789 [2024-12-16 06:04:45.450591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.789 [2024-12-16 06:04:45.450604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.789 [2024-12-16 06:04:45.450610] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.789 [2024-12-16 06:04:45.450616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.789 [2024-12-16 06:04:45.450630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.789 qpair failed and we were unable to recover it. 00:36:11.789 [2024-12-16 06:04:45.460559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.789 [2024-12-16 06:04:45.460612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.789 [2024-12-16 06:04:45.460624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.789 [2024-12-16 06:04:45.460630] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.789 [2024-12-16 06:04:45.460636] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.789 [2024-12-16 06:04:45.460650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.789 qpair failed and we were unable to recover it. 00:36:11.789 [2024-12-16 06:04:45.470627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.789 [2024-12-16 06:04:45.470682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.789 [2024-12-16 06:04:45.470696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.789 [2024-12-16 06:04:45.470703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.789 [2024-12-16 06:04:45.470709] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.789 [2024-12-16 06:04:45.470723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.789 qpair failed and we were unable to recover it. 
00:36:11.789 [2024-12-16 06:04:45.480628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.789 [2024-12-16 06:04:45.480684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.789 [2024-12-16 06:04:45.480697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.789 [2024-12-16 06:04:45.480704] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.789 [2024-12-16 06:04:45.480710] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.789 [2024-12-16 06:04:45.480724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.789 qpair failed and we were unable to recover it. 00:36:11.789 [2024-12-16 06:04:45.490647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.789 [2024-12-16 06:04:45.490707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.789 [2024-12-16 06:04:45.490721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.789 [2024-12-16 06:04:45.490727] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.789 [2024-12-16 06:04:45.490733] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.789 [2024-12-16 06:04:45.490747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.789 qpair failed and we were unable to recover it. 00:36:11.789 [2024-12-16 06:04:45.500680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.789 [2024-12-16 06:04:45.500732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.789 [2024-12-16 06:04:45.500745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.789 [2024-12-16 06:04:45.500752] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.789 [2024-12-16 06:04:45.500758] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.789 [2024-12-16 06:04:45.500772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.789 qpair failed and we were unable to recover it. 
00:36:11.789 [2024-12-16 06:04:45.510722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.789 [2024-12-16 06:04:45.510809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.789 [2024-12-16 06:04:45.510822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.789 [2024-12-16 06:04:45.510828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.789 [2024-12-16 06:04:45.510834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.789 [2024-12-16 06:04:45.510851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.789 qpair failed and we were unable to recover it. 00:36:11.789 [2024-12-16 06:04:45.520796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.789 [2024-12-16 06:04:45.520867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.789 [2024-12-16 06:04:45.520883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.789 [2024-12-16 06:04:45.520889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.789 [2024-12-16 06:04:45.520896] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.789 [2024-12-16 06:04:45.520910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.789 qpair failed and we were unable to recover it. 00:36:11.789 [2024-12-16 06:04:45.530773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.789 [2024-12-16 06:04:45.530829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.789 [2024-12-16 06:04:45.530843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.790 [2024-12-16 06:04:45.530853] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.790 [2024-12-16 06:04:45.530859] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.790 [2024-12-16 06:04:45.530874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.790 qpair failed and we were unable to recover it. 
00:36:11.790 [2024-12-16 06:04:45.540796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.790 [2024-12-16 06:04:45.540852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.790 [2024-12-16 06:04:45.540866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.790 [2024-12-16 06:04:45.540873] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.790 [2024-12-16 06:04:45.540879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.790 [2024-12-16 06:04:45.540893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.790 qpair failed and we were unable to recover it. 00:36:11.790 [2024-12-16 06:04:45.550830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.790 [2024-12-16 06:04:45.550893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.790 [2024-12-16 06:04:45.550907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.790 [2024-12-16 06:04:45.550913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.790 [2024-12-16 06:04:45.550919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.790 [2024-12-16 06:04:45.550934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.790 qpair failed and we were unable to recover it. 00:36:11.790 [2024-12-16 06:04:45.560869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.790 [2024-12-16 06:04:45.560924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.790 [2024-12-16 06:04:45.560937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.790 [2024-12-16 06:04:45.560943] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.790 [2024-12-16 06:04:45.560949] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.790 [2024-12-16 06:04:45.560967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.790 qpair failed and we were unable to recover it. 
00:36:11.790 [2024-12-16 06:04:45.570892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.790 [2024-12-16 06:04:45.570949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.790 [2024-12-16 06:04:45.570962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.790 [2024-12-16 06:04:45.570968] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.790 [2024-12-16 06:04:45.570974] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.790 [2024-12-16 06:04:45.570989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.790 qpair failed and we were unable to recover it. 00:36:11.790 [2024-12-16 06:04:45.580965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.790 [2024-12-16 06:04:45.581028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.790 [2024-12-16 06:04:45.581041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.790 [2024-12-16 06:04:45.581048] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.790 [2024-12-16 06:04:45.581054] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.790 [2024-12-16 06:04:45.581068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.790 qpair failed and we were unable to recover it. 00:36:11.790 [2024-12-16 06:04:45.590947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.790 [2024-12-16 06:04:45.591003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.790 [2024-12-16 06:04:45.591016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.790 [2024-12-16 06:04:45.591022] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.790 [2024-12-16 06:04:45.591028] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.790 [2024-12-16 06:04:45.591042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.790 qpair failed and we were unable to recover it. 
00:36:11.790 [2024-12-16 06:04:45.600994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.790 [2024-12-16 06:04:45.601054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.790 [2024-12-16 06:04:45.601068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.790 [2024-12-16 06:04:45.601074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.790 [2024-12-16 06:04:45.601080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.790 [2024-12-16 06:04:45.601095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.790 qpair failed and we were unable to recover it. 00:36:11.790 [2024-12-16 06:04:45.611012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.790 [2024-12-16 06:04:45.611068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.790 [2024-12-16 06:04:45.611084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.790 [2024-12-16 06:04:45.611091] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.790 [2024-12-16 06:04:45.611096] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.790 [2024-12-16 06:04:45.611110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.790 qpair failed and we were unable to recover it. 00:36:11.790 [2024-12-16 06:04:45.621039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.790 [2024-12-16 06:04:45.621094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.790 [2024-12-16 06:04:45.621107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.790 [2024-12-16 06:04:45.621114] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.790 [2024-12-16 06:04:45.621120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.790 [2024-12-16 06:04:45.621134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.790 qpair failed and we were unable to recover it. 
00:36:11.790 [2024-12-16 06:04:45.631079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.790 [2024-12-16 06:04:45.631138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.790 [2024-12-16 06:04:45.631151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.790 [2024-12-16 06:04:45.631157] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.790 [2024-12-16 06:04:45.631163] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.790 [2024-12-16 06:04:45.631177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.790 qpair failed and we were unable to recover it. 00:36:11.790 [2024-12-16 06:04:45.641106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.790 [2024-12-16 06:04:45.641164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.790 [2024-12-16 06:04:45.641178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.790 [2024-12-16 06:04:45.641184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.790 [2024-12-16 06:04:45.641190] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:11.790 [2024-12-16 06:04:45.641204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.790 qpair failed and we were unable to recover it. 00:36:12.049 [2024-12-16 06:04:45.651115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.049 [2024-12-16 06:04:45.651164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.049 [2024-12-16 06:04:45.651178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.049 [2024-12-16 06:04:45.651184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.049 [2024-12-16 06:04:45.651193] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:12.049 [2024-12-16 06:04:45.651207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:12.049 qpair failed and we were unable to recover it. 
00:36:12.049 [2024-12-16 06:04:45.661149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.049 [2024-12-16 06:04:45.661201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.049 [2024-12-16 06:04:45.661214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.049 [2024-12-16 06:04:45.661219] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.049 [2024-12-16 06:04:45.661225] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:12.049 [2024-12-16 06:04:45.661239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:12.049 qpair failed and we were unable to recover it. 00:36:12.049 [2024-12-16 06:04:45.671175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.049 [2024-12-16 06:04:45.671230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.049 [2024-12-16 06:04:45.671243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.049 [2024-12-16 06:04:45.671249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.049 [2024-12-16 06:04:45.671255] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:12.049 [2024-12-16 06:04:45.671269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:12.049 qpair failed and we were unable to recover it. 00:36:12.049 [2024-12-16 06:04:45.681204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.049 [2024-12-16 06:04:45.681260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.049 [2024-12-16 06:04:45.681274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.049 [2024-12-16 06:04:45.681280] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.049 [2024-12-16 06:04:45.681286] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:12.049 [2024-12-16 06:04:45.681301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:12.049 qpair failed and we were unable to recover it. 
00:36:12.049 [2024-12-16 06:04:45.691225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.049 [2024-12-16 06:04:45.691282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.049 [2024-12-16 06:04:45.691295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.049 [2024-12-16 06:04:45.691302] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.049 [2024-12-16 06:04:45.691308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:12.049 [2024-12-16 06:04:45.691322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:12.049 qpair failed and we were unable to recover it. 00:36:12.050 [2024-12-16 06:04:45.701283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.050 [2024-12-16 06:04:45.701375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.050 [2024-12-16 06:04:45.701387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.050 [2024-12-16 06:04:45.701394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.050 [2024-12-16 06:04:45.701399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:12.050 [2024-12-16 06:04:45.701414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:12.050 qpair failed and we were unable to recover it. 00:36:12.050 [2024-12-16 06:04:45.711297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.050 [2024-12-16 06:04:45.711353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.050 [2024-12-16 06:04:45.711367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.050 [2024-12-16 06:04:45.711373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.050 [2024-12-16 06:04:45.711380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:12.050 [2024-12-16 06:04:45.711394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:12.050 qpair failed and we were unable to recover it. 
00:36:12.050 [2024-12-16 06:04:45.721317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.050 [2024-12-16 06:04:45.721372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.050 [2024-12-16 06:04:45.721385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.050 [2024-12-16 06:04:45.721391] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.050 [2024-12-16 06:04:45.721397] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:12.050 [2024-12-16 06:04:45.721412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:12.050 qpair failed and we were unable to recover it. 00:36:12.050 [2024-12-16 06:04:45.731353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.050 [2024-12-16 06:04:45.731410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.050 [2024-12-16 06:04:45.731424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.050 [2024-12-16 06:04:45.731430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.050 [2024-12-16 06:04:45.731437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:12.050 [2024-12-16 06:04:45.731451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:12.050 qpair failed and we were unable to recover it. 00:36:12.050 [2024-12-16 06:04:45.741406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.050 [2024-12-16 06:04:45.741488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.050 [2024-12-16 06:04:45.741501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.050 [2024-12-16 06:04:45.741507] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.050 [2024-12-16 06:04:45.741516] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:12.050 [2024-12-16 06:04:45.741531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:12.050 qpair failed and we were unable to recover it. 
00:36:12.050 [2024-12-16 06:04:45.751450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.050 [2024-12-16 06:04:45.751505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.050 [2024-12-16 06:04:45.751519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.050 [2024-12-16 06:04:45.751525] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.050 [2024-12-16 06:04:45.751531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:12.050 [2024-12-16 06:04:45.751545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:12.050 qpair failed and we were unable to recover it. 00:36:12.050 [2024-12-16 06:04:45.761445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.050 [2024-12-16 06:04:45.761500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.050 [2024-12-16 06:04:45.761513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.050 [2024-12-16 06:04:45.761519] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.050 [2024-12-16 06:04:45.761525] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:12.050 [2024-12-16 06:04:45.761539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:12.050 qpair failed and we were unable to recover it. 00:36:12.050 [2024-12-16 06:04:45.771485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.050 [2024-12-16 06:04:45.771551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.050 [2024-12-16 06:04:45.771564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.050 [2024-12-16 06:04:45.771571] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.050 [2024-12-16 06:04:45.771577] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:12.050 [2024-12-16 06:04:45.771591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:12.050 qpair failed and we were unable to recover it. 
00:36:12.050 [2024-12-16 06:04:45.781477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.050 [2024-12-16 06:04:45.781525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.050 [2024-12-16 06:04:45.781538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.050 [2024-12-16 06:04:45.781544] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.050 [2024-12-16 06:04:45.781550] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:12.050 [2024-12-16 06:04:45.781564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:12.050 qpair failed and we were unable to recover it. 00:36:12.050 [2024-12-16 06:04:45.791531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.050 [2024-12-16 06:04:45.791608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.050 [2024-12-16 06:04:45.791621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.050 [2024-12-16 06:04:45.791628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.050 [2024-12-16 06:04:45.791633] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:12.050 [2024-12-16 06:04:45.791648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:12.050 qpair failed and we were unable to recover it. 00:36:12.050 [2024-12-16 06:04:45.801589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.050 [2024-12-16 06:04:45.801641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.050 [2024-12-16 06:04:45.801654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.050 [2024-12-16 06:04:45.801660] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.050 [2024-12-16 06:04:45.801666] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:12.050 [2024-12-16 06:04:45.801680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:12.050 qpair failed and we were unable to recover it. 
00:36:12.050 [2024-12-16 06:04:45.811572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.050 [2024-12-16 06:04:45.811635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.050 [2024-12-16 06:04:45.811648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.050 [2024-12-16 06:04:45.811654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.050 [2024-12-16 06:04:45.811660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:12.050 [2024-12-16 06:04:45.811675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:12.050 qpair failed and we were unable to recover it. 00:36:12.050 [2024-12-16 06:04:45.821606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.050 [2024-12-16 06:04:45.821688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.050 [2024-12-16 06:04:45.821701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.050 [2024-12-16 06:04:45.821707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.050 [2024-12-16 06:04:45.821713] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb8000b90 00:36:12.050 [2024-12-16 06:04:45.821727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:12.050 qpair failed and we were unable to recover it. 00:36:12.050 [2024-12-16 06:04:45.831641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.051 [2024-12-16 06:04:45.831702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.051 [2024-12-16 06:04:45.831722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.051 [2024-12-16 06:04:45.831734] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.051 [2024-12-16 06:04:45.831740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb0000b90 00:36:12.051 [2024-12-16 06:04:45.831759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.051 qpair failed and we were unable to recover it. 
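The last block above already shows the failure moving to a second I/O qpair (qpair id 2, tqpair 0x7ffbb0000b90), so more than one qpair is being dropped during the disconnect window. To tally how many CONNECT retries failed per qpair from a saved copy of this console output (the file name build.log is only a placeholder):

# Hedged sketch: count failed CONNECT completions per qpair id in a saved log.
grep -o 'CQ transport error -6 (No such device or address) on qpair id [0-9]*' build.log \
  | sort | uniq -c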
00:36:12.051 [2024-12-16 06:04:45.841677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.051 [2024-12-16 06:04:45.841741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.051 [2024-12-16 06:04:45.841756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.051 [2024-12-16 06:04:45.841762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.051 [2024-12-16 06:04:45.841768] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ffbb0000b90 00:36:12.051 [2024-12-16 06:04:45.841784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.051 qpair failed and we were unable to recover it. 00:36:12.051 [2024-12-16 06:04:45.841866] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:36:12.051 A controller has encountered a failure and is being reset. 00:36:12.051 Controller properly reset. 00:36:12.051 Initializing NVMe Controllers 00:36:12.051 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:12.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:12.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:12.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:12.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:12.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:12.051 Initialization complete. Launching workers. 
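Once the keep-alive submission fails, the host declares the controller failed, resets it and re-attaches, and the "Associating TCP ... with lcore N" lines show the I/O qpairs being re-bound to cores 0-3. At that point the target-side view can be checked over RPC; this is only a sketch, assuming the stock rpc.py script and the default /var/tmp/spdk.sock socket (the subsystem NQN is the one from the log):

# Hedged sketch: inspect the target after "Controller properly reset."
# rpc.py location and the default RPC socket are assumptions.
RPC=./scripts/rpc.py
$RPC nvmf_subsystem_get_controllers nqn.2016-06.io.spdk:cnode1   # re-created controller(s)
$RPC nvmf_subsystem_get_qpairs      nqn.2016-06.io.spdk:cnode1   # admin + per-core I/O qpairs
$RPC nvmf_subsystem_get_listeners   nqn.2016-06.io.spdk:cnode1   # should show 10.0.0.2 port 4420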
00:36:12.051 Starting thread on core 1 00:36:12.051 Starting thread on core 2 00:36:12.051 Starting thread on core 3 00:36:12.051 Starting thread on core 0 00:36:12.051 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:12.051 00:36:12.051 real 0m10.646s 00:36:12.051 user 0m19.185s 00:36:12.051 sys 0m4.423s 00:36:12.051 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:12.051 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:12.051 ************************************ 00:36:12.051 END TEST nvmf_target_disconnect_tc2 00:36:12.051 ************************************ 00:36:12.309 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:12.309 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:12.309 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:12.309 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:12.309 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:12.309 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:12.309 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:12.309 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:12.309 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:12.309 rmmod nvme_tcp 00:36:12.309 rmmod nvme_fabrics 00:36:12.309 rmmod nvme_keyring 00:36:12.309 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:12.309 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:12.309 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:12.309 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@513 -- # '[' -n 3580056 ']' 00:36:12.309 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # killprocess 3580056 00:36:12.309 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3580056 ']' 00:36:12.309 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 3580056 00:36:12.309 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:36:12.309 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:12.309 06:04:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3580056 00:36:12.309 06:04:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:36:12.309 06:04:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:36:12.309 06:04:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3580056' 00:36:12.309 killing process with pid 3580056 00:36:12.309 06:04:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 3580056 00:36:12.309 06:04:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 3580056 00:36:12.567 06:04:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:12.567 06:04:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:12.567 06:04:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:12.567 06:04:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:12.567 06:04:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:36:12.567 06:04:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:12.567 06:04:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:36:12.567 06:04:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:12.567 06:04:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:12.567 06:04:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:12.567 06:04:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:12.567 06:04:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:14.470 06:04:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:14.470 00:36:14.470 real 0m19.006s 00:36:14.470 user 0m46.215s 00:36:14.470 sys 0m8.990s 00:36:14.470 06:04:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:14.470 06:04:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:14.470 ************************************ 00:36:14.470 END TEST nvmf_target_disconnect 00:36:14.470 ************************************ 00:36:14.728 06:04:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:14.728 00:36:14.728 real 7m16.241s 00:36:14.728 user 16m47.416s 00:36:14.728 sys 2m3.567s 00:36:14.728 06:04:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:14.728 06:04:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.728 ************************************ 00:36:14.728 END TEST nvmf_host 00:36:14.728 ************************************ 00:36:14.728 06:04:48 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:14.728 06:04:48 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:14.728 06:04:48 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:14.728 06:04:48 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:14.728 06:04:48 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:14.728 06:04:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:14.728 ************************************ 00:36:14.728 START TEST nvmf_target_core_interrupt_mode 00:36:14.728 ************************************ 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:14.728 * Looking for test storage... 00:36:14.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:14.728 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:14.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.988 --rc genhtml_branch_coverage=1 00:36:14.988 --rc genhtml_function_coverage=1 00:36:14.988 --rc genhtml_legend=1 00:36:14.988 --rc geninfo_all_blocks=1 00:36:14.988 --rc geninfo_unexecuted_blocks=1 00:36:14.988 00:36:14.988 ' 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:14.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.988 --rc genhtml_branch_coverage=1 00:36:14.988 --rc genhtml_function_coverage=1 00:36:14.988 --rc genhtml_legend=1 00:36:14.988 --rc geninfo_all_blocks=1 00:36:14.988 --rc geninfo_unexecuted_blocks=1 00:36:14.988 00:36:14.988 ' 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:14.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.988 --rc genhtml_branch_coverage=1 00:36:14.988 --rc genhtml_function_coverage=1 00:36:14.988 --rc genhtml_legend=1 00:36:14.988 --rc geninfo_all_blocks=1 00:36:14.988 --rc geninfo_unexecuted_blocks=1 00:36:14.988 00:36:14.988 ' 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:14.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.988 --rc genhtml_branch_coverage=1 00:36:14.988 --rc genhtml_function_coverage=1 00:36:14.988 --rc genhtml_legend=1 00:36:14.988 --rc geninfo_all_blocks=1 00:36:14.988 --rc geninfo_unexecuted_blocks=1 00:36:14.988 00:36:14.988 ' 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:14.988 ************************************ 00:36:14.988 START TEST nvmf_abort 00:36:14.988 ************************************ 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:14.988 * Looking for test storage... 00:36:14.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:14.988 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:14.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.989 --rc genhtml_branch_coverage=1 00:36:14.989 --rc genhtml_function_coverage=1 00:36:14.989 --rc genhtml_legend=1 00:36:14.989 --rc geninfo_all_blocks=1 00:36:14.989 --rc geninfo_unexecuted_blocks=1 00:36:14.989 00:36:14.989 ' 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:14.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.989 --rc genhtml_branch_coverage=1 00:36:14.989 --rc genhtml_function_coverage=1 00:36:14.989 --rc genhtml_legend=1 00:36:14.989 --rc geninfo_all_blocks=1 00:36:14.989 --rc geninfo_unexecuted_blocks=1 00:36:14.989 00:36:14.989 ' 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:14.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.989 --rc genhtml_branch_coverage=1 00:36:14.989 --rc genhtml_function_coverage=1 00:36:14.989 --rc genhtml_legend=1 00:36:14.989 --rc geninfo_all_blocks=1 00:36:14.989 --rc geninfo_unexecuted_blocks=1 00:36:14.989 00:36:14.989 ' 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:14.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.989 --rc genhtml_branch_coverage=1 00:36:14.989 --rc genhtml_function_coverage=1 00:36:14.989 --rc genhtml_legend=1 00:36:14.989 --rc geninfo_all_blocks=1 00:36:14.989 --rc geninfo_unexecuted_blocks=1 00:36:14.989 00:36:14.989 ' 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:14.989 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:15.247 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:15.247 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:15.247 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:15.248 06:04:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:15.248 06:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:20.512 06:04:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:20.512 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:20.513 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:20.513 06:04:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:20.513 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:20.513 Found net devices under 0000:af:00.0: cvl_0_0 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == 
up ]] 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:20.513 Found net devices under 0000:af:00.1: cvl_0_1 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:20.513 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:20.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:20.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:36:20.772 00:36:20.772 --- 10.0.0.2 ping statistics --- 00:36:20.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:20.772 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:20.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:20.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:36:20.772 00:36:20.772 --- 10.0.0.1 ping statistics --- 00:36:20.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:20.772 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=3584496 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 3584496 00:36:20.772 
06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3584496 ']' 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:20.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:20.772 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.772 [2024-12-16 06:04:54.470900] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:20.772 [2024-12-16 06:04:54.471838] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:20.772 [2024-12-16 06:04:54.471886] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:20.772 [2024-12-16 06:04:54.532048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:20.772 [2024-12-16 06:04:54.572551] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:20.772 [2024-12-16 06:04:54.572591] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:20.772 [2024-12-16 06:04:54.572597] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:20.772 [2024-12-16 06:04:54.572603] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:20.772 [2024-12-16 06:04:54.572608] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:20.772 [2024-12-16 06:04:54.572729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:36:20.772 [2024-12-16 06:04:54.572751] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:36:20.772 [2024-12-16 06:04:54.572751] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:21.030 [2024-12-16 06:04:54.646095] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:21.030 [2024-12-16 06:04:54.646206] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:21.030 [2024-12-16 06:04:54.646355] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:21.030 [2024-12-16 06:04:54.646473] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
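[editor's note] The nvmf_tgt instance behind nvmfpid 3584496 runs inside that namespace with -m 0xE and --interrupt-mode; 0xE is binary 1110, i.e. cores 1-3, which is why DPDK reports three available cores, three reactors come up, and each poll group is switched to interrupt mode. A sketch of the equivalent manual launch, using the same binary path as this workspace:

  # -m 0xE = 0b1110 -> reactors on cores 1, 2 and 3; -e 0xFFFF enables every tracepoint group;
  # --interrupt-mode makes the reactors and poll groups event driven instead of busy polling.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  # Per the startup notices, 'spdk_trace -s nvmf -i 0' can later capture a snapshot of those tracepoints.
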
00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.030 [2024-12-16 06:04:54.709294] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.030 Malloc0 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.030 Delay0 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.030 06:04:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.030 [2024-12-16 06:04:54.773418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.030 06:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:21.289 [2024-12-16 06:04:54.923035] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:23.188 Initializing NVMe Controllers 00:36:23.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:23.188 controller IO queue size 128 less than required 00:36:23.188 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:23.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:23.188 Initialization complete. Launching workers. 
00:36:23.188 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 38155 00:36:23.188 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38216, failed to submit 66 00:36:23.188 success 38155, unsuccessful 61, failed 0 00:36:23.188 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:23.188 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.189 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:23.189 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.189 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:23.189 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:23.189 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:23.189 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:23.447 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:23.447 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:23.447 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:23.447 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:23.447 rmmod nvme_tcp 00:36:23.447 rmmod nvme_fabrics 00:36:23.447 rmmod nvme_keyring 00:36:23.447 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:23.447 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:23.447 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:23.447 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 3584496 ']' 00:36:23.447 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 3584496 00:36:23.447 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3584496 ']' 00:36:23.447 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3584496 00:36:23.447 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:36:23.447 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:23.447 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3584496 00:36:23.447 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:23.447 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:23.447 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3584496' 00:36:23.447 killing process with pid 3584496 
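[editor's note] Pulling the rpc_cmd traces of this test case together, the abort scenario is roughly the sequence below. This is a sketch only, assuming rpc.py talks to the target's default /var/tmp/spdk.sock; the bdev_delay_create values are latencies in microseconds, so roughly one second each, which is what gives the abort example outstanding I/O to cancel.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0                    # 64 MiB RAM bdev, 4 KiB blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Queue up to 128 I/Os for 1 s against the deliberately slow namespace and abort what is still
  # outstanding; the NS/CTRLR summary above counts submitted and successful aborts.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
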
00:36:23.447 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3584496 00:36:23.447 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3584496 00:36:23.705 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:23.705 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:23.705 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:23.705 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:23.705 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:36:23.705 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:23.705 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:36:23.705 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:23.705 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:23.705 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:23.705 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:23.705 06:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:25.609 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:25.609 00:36:25.609 real 0m10.788s 00:36:25.609 user 0m10.357s 00:36:25.609 sys 0m5.396s 00:36:25.609 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:25.609 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:25.609 ************************************ 00:36:25.609 END TEST nvmf_abort 00:36:25.609 ************************************ 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:25.868 ************************************ 00:36:25.868 START TEST nvmf_ns_hotplug_stress 00:36:25.868 ************************************ 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:25.868 * Looking for test storage... 
00:36:25.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:25.868 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:25.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.869 --rc genhtml_branch_coverage=1 00:36:25.869 --rc genhtml_function_coverage=1 00:36:25.869 --rc genhtml_legend=1 00:36:25.869 --rc geninfo_all_blocks=1 00:36:25.869 --rc geninfo_unexecuted_blocks=1 00:36:25.869 00:36:25.869 ' 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:25.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.869 --rc genhtml_branch_coverage=1 00:36:25.869 --rc genhtml_function_coverage=1 00:36:25.869 --rc genhtml_legend=1 00:36:25.869 --rc geninfo_all_blocks=1 00:36:25.869 --rc geninfo_unexecuted_blocks=1 00:36:25.869 00:36:25.869 ' 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:25.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.869 --rc genhtml_branch_coverage=1 00:36:25.869 --rc genhtml_function_coverage=1 00:36:25.869 --rc genhtml_legend=1 00:36:25.869 --rc geninfo_all_blocks=1 00:36:25.869 --rc geninfo_unexecuted_blocks=1 00:36:25.869 00:36:25.869 ' 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:25.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.869 --rc genhtml_branch_coverage=1 00:36:25.869 --rc genhtml_function_coverage=1 
00:36:25.869 --rc genhtml_legend=1 00:36:25.869 --rc geninfo_all_blocks=1 00:36:25.869 --rc geninfo_unexecuted_blocks=1 00:36:25.869 00:36:25.869 ' 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
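[editor's note] The hostnqn/hostid pair printed above comes straight from nvme-cli. A minimal reproduction follows; the ##*: expansion is an assumption about how common.sh strips the UUID, and the values will differ per host.

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the trailing UUID, used as --hostid
  echo "$NVME_HOSTNQN" "$NVME_HOSTID"
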
00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:25.869 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:26.127 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:26.127 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:26.127 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:26.127 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:26.127 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:26.127 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:26.127 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:26.128 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:26.128 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:26.128 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:26.128 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:26.128 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:26.128 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:26.128 06:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:31.401 06:05:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:31.401 06:05:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:31.401 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:31.401 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:31.401 06:05:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:31.401 Found net devices under 0000:af:00.0: cvl_0_0 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:31.401 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:31.402 Found net devices under 0000:af:00.1: cvl_0_1 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:31.402 06:05:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:31.402 06:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:31.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:31.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:36:31.402 00:36:31.402 --- 10.0.0.2 ping statistics --- 00:36:31.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:31.402 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:31.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:31.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:36:31.402 00:36:31.402 --- 10.0.0.1 ping statistics --- 00:36:31.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:31.402 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=3588414 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 3588414 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3588414 ']' 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:31.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
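[editor's note] waitforlisten blocks until the freshly started target answers on its RPC socket. A minimal stand-in is sketched below, assuming the default /var/tmp/spdk.sock and using the standard rpc_get_methods call as the liveness probe:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  until $rpc -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt died before listening" >&2; exit 1; }
      sleep 0.5
  done
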
00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:31.402 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:31.402 [2024-12-16 06:05:05.174551] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:31.402 [2024-12-16 06:05:05.175432] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:31.402 [2024-12-16 06:05:05.175464] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:31.402 [2024-12-16 06:05:05.234834] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:31.664 [2024-12-16 06:05:05.274526] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:31.664 [2024-12-16 06:05:05.274564] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:31.664 [2024-12-16 06:05:05.274575] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:31.664 [2024-12-16 06:05:05.274580] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:31.664 [2024-12-16 06:05:05.274585] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:31.664 [2024-12-16 06:05:05.274625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:36:31.664 [2024-12-16 06:05:05.274711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:36:31.664 [2024-12-16 06:05:05.274712] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:31.664 [2024-12-16 06:05:05.345080] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:31.664 [2024-12-16 06:05:05.345166] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:31.664 [2024-12-16 06:05:05.345373] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:31.664 [2024-12-16 06:05:05.345440] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:36:31.664 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:31.664 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:36:31.664 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:31.664 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:31.664 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:31.664 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:31.664 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:36:31.664 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:31.923 [2024-12-16 06:05:05.559206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:31.923 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:31.923 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:32.181 [2024-12-16 06:05:05.927462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:32.182 06:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:32.440 06:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:32.699 Malloc0 00:36:32.699 06:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:32.699 Delay0 00:36:32.699 06:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:32.959 06:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:33.219 NULL1 00:36:33.219 06:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
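[editor's note] The ns_hotplug_stress target built by the rpc.py calls above, plus the stress loop that the remaining traces iterate (spdk_nvme_perf keeps I/O flowing while namespace 1 is removed, re-added and the null bdev grown by one each pass), sketches out roughly as follows. A sketch under the same socket assumption as above, not the script verbatim; null_size starts at 1000 and increases by one per iteration, matching the bdev_null_resize calls below.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # at most 10 namespaces
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc bdev_null_create NULL1 1000 512                  # 1000 MiB null bdev, 512-byte blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # Stress loop: hot-remove and re-add namespace 1 and resize NULL1 while perf is still running.
  $spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  perf_pid=$!
  null_size=1000
  while kill -0 "$perf_pid" 2>/dev/null; do
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"
  done
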
00:36:33.219 06:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:33.219 06:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3588670 00:36:33.219 06:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:33.219 06:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:34.595 Read completed with error (sct=0, sc=11) 00:36:34.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:34.595 06:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:34.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:34.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:34.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:34.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:34.595 06:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:34.595 06:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:34.854 true 00:36:34.854 06:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:34.854 06:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.789 06:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:35.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:35.789 06:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:35.789 06:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:36.048 true 00:36:36.048 06:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:36.048 06:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:36.307 06:05:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:36.565 06:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:36.565 06:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:36.565 true 00:36:36.565 06:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:36.565 06:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.941 06:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:37.941 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:37.941 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:37.941 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:37.941 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:37.941 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:37.941 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:37.941 06:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:37.941 06:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:38.200 true 00:36:38.200 06:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:38.200 06:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:39.138 06:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.138 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:39.138 06:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:39.138 06:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:39.396 true 00:36:39.396 06:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:39.396 06:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:39.655 06:05:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.655 06:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:39.655 06:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:39.913 true 00:36:39.913 06:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:39.913 06:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:41.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:41.290 06:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:41.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:41.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:41.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:41.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:41.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:41.290 06:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:41.290 06:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:41.548 true 00:36:41.548 06:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:41.548 06:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.483 06:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.483 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:42.483 06:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:42.483 06:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:42.742 true 00:36:42.742 06:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:42.742 06:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:43.000 06:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.000 06:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:43.000 06:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:43.259 true 00:36:43.259 06:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:43.259 06:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:44.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.635 06:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.635 06:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:44.635 06:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:44.893 true 00:36:44.893 06:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:44.893 06:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.828 06:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:45.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.828 06:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:45.829 06:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:46.087 true 00:36:46.087 06:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:46.087 06:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.087 06:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:46.346 06:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:46.346 06:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:46.604 true 00:36:46.604 06:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:46.604 06:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.981 06:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.981 06:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:47.981 06:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:48.240 true 00:36:48.240 06:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:48.240 06:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:49.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:49.175 06:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:49.175 06:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:49.175 06:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:49.433 true 00:36:49.433 06:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:49.433 06:05:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:49.433 06:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:49.692 06:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:49.692 06:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:49.951 true 00:36:49.951 06:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:49.951 06:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.327 06:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:51.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.327 06:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:51.327 06:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:51.585 true 00:36:51.585 06:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:51.585 06:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.519 06:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:52.519 06:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:52.519 06:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:52.777 true 00:36:52.777 06:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:52.777 06:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:53.036 06:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:53.036 06:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:53.036 06:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:53.294 true 00:36:53.294 06:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:53.294 06:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:54.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:54.229 06:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:54.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:54.487 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:54.487 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:54.487 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:54.487 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:54.487 06:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:54.487 06:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:54.746 true 00:36:54.746 06:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:54.746 06:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:55.680 06:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:55.680 06:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:55.680 06:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:55.939 true 00:36:55.939 06:05:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:55.939 06:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:56.197 06:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:56.197 06:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:56.197 06:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:36:56.455 true 00:36:56.455 06:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:56.455 06:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:57.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:57.830 06:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:57.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:57.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:57.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:57.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:57.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:57.830 06:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:57.830 06:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:58.088 true 00:36:58.088 06:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:58.088 06:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.022 06:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:59.022 06:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:59.022 06:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:59.280 true 00:36:59.280 06:05:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:59.280 06:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.537 06:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:59.795 06:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:59.795 06:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:59.795 true 00:36:59.795 06:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:36:59.795 06:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:01.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.170 06:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:01.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.170 06:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:37:01.170 06:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:37:01.428 true 00:37:01.428 06:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:37:01.428 06:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.360 06:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:02.360 06:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:37:02.360 06:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1026 00:37:02.618 true 00:37:02.619 06:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:37:02.619 06:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.877 06:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:03.135 06:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:37:03.135 06:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:37:03.135 true 00:37:03.135 06:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670 00:37:03.135 06:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:04.509 Initializing NVMe Controllers 00:37:04.509 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:04.509 Controller IO queue size 128, less than required. 00:37:04.509 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:04.509 Controller IO queue size 128, less than required. 00:37:04.509 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:04.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:04.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:37:04.509 Initialization complete. Launching workers. 
00:37:04.509 ========================================================
00:37:04.509 Latency(us)
00:37:04.509 Device Information : IOPS MiB/s Average min max
00:37:04.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2137.00 1.04 43506.78 2008.94 1013564.86
00:37:04.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18602.70 9.08 6880.42 1208.46 448149.09
00:37:04.510 ========================================================
00:37:04.510 Total : 20739.70 10.13 10654.37 1208.46 1013564.86
00:37:04.510
00:37:04.510 06:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:04.510 06:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:37:04.510 06:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:37:04.768 true
00:37:04.768 06:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3588670
00:37:04.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3588670) - No such process
00:37:04.768 06:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3588670
00:37:04.768 06:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:04.768 06:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:05.026 06:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:37:05.026 06:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:37:05.026 06:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:37:05.026 06:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:37:05.026 06:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:37:05.285 null0
00:37:05.285 06:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:37:05.285 06:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:37:05.285 06:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:37:05.544 null1
00:37:05.544 06:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:37:05.544
06:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:05.544 06:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:37:05.544 null2 00:37:05.544 06:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:05.544 06:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:05.544 06:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:37:05.802 null3 00:37:05.802 06:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:05.802 06:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:05.802 06:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:37:06.061 null4 00:37:06.061 06:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:06.061 06:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:06.061 06:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:37:06.061 null5 00:37:06.061 06:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:06.061 06:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:06.061 06:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:37:06.319 null6 00:37:06.319 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:06.319 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:06.319 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:37:06.578 null7 00:37:06.578 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:06.578 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:06.578 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:37:06.578 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:06.578 06:05:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:06.578 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:06.578 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:37:06.578 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:06.578 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:37:06.578 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:06.578 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.578 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:06.578 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:06.578 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:06.578 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
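From sh@58 onward the test switches to a concurrent phase: eight null bdevs (null0 through null7, created with bdev_null_create "null$i" 100 4096) each get an add_remove worker launched in the background, and each worker's PID is appended to the pids array. Per the sh@14-sh@18 xtrace interleaved above, a worker repeatedly attaches its bdev under a fixed namespace ID and detaches it again. A sketch of one worker, reconstructed from that xtrace (the function and loop syntax are inferred, not quoted from the script):

    # Reconstructed sketch of one worker, e.g. "add_remove 1 null0"; rpc.py path shortened.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do                                                   # sh@16: ten add/remove cycles
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
        done
    }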
00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
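The surrounding sh@58-sh@66 entries give the orchestration for those workers: one loop creates the null bdevs, a second loop launches add_remove for namespace IDs 1 through 8 and collects the PIDs, and the test then waits on all of them (the "wait 3593887 3593889 ..." entry just below). A sketch of that outer structure, again reconstructed from the xtrace rather than quoted from ns_hotplug_stress.sh:

    # Reconstructed sketch of the concurrent add/remove phase; rpc.py path shortened.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do               # sh@59-sh@60: create null0 .. null7
        rpc.py bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do               # sh@62-sh@64: one background worker per bdev
        add_remove "$((i + 1))" "null$i" &             # namespace IDs 1..8
        pids+=($!)
    done
    wait "${pids[@]}"                                   # sh@66 in the log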
00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3593887 3593889 3593892 3593895 3593898 3593901 3593904 3593906 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:06.579 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:06.881 06:05:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.881 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:07.170 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.170 06:05:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.170 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:07.170 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.170 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.170 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:07.170 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:07.170 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:07.170 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:07.170 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:07.171 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:07.171 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:07.171 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:07.171 06:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.504 06:05:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:07.504 06:05:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:07.504 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:07.792 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.792 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.792 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:07.793 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.793 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.793 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:07.793 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.793 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.793 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:07.793 06:05:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.793 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.793 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:07.793 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.793 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.793 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.793 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:07.793 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.793 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:07.793 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.793 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.793 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:07.793 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.793 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.793 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:08.052 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:08.052 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:08.052 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:08.052 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:37:08.052 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:08.052 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:08.052 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:08.052 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:08.052 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.052 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.052 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:08.052 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.052 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.052 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:08.052 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.052 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.311 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:08.311 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.311 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.311 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:08.311 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.311 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.311 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:08.311 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.311 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.311 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:08.311 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.311 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.311 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:08.311 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.311 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.311 06:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:08.311 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:08.311 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:08.311 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:08.311 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:08.311 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:08.311 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:08.311 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:08.311 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.570 06:05:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.570 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:08.829 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:08.829 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:08.829 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:08.829 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:08.829 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:08.829 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:08.829 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:08.829 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:09.087 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.087 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.087 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.087 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.087 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:09.087 06:05:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:09.087 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.087 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.087 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:09.087 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.088 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.088 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.088 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:09.088 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.088 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:09.088 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.088 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.088 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:09.088 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.088 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.088 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:09.088 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.088 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.088 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:09.088 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:09.088 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:09.088 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:09.346 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:09.346 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:09.346 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:09.346 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:09.346 06:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.346 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:09.605 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:09.605 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:09.605 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:09.605 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:09.605 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:09.605 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:09.605 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:09.605 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:09.863 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.863 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.863 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:09.863 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.863 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.863 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:09.863 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.863 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.863 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:09.863 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.863 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.863 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.863 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:09.863 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.863 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:09.863 06:05:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.863 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.864 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:09.864 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.864 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.864 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:09.864 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.864 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.864 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:10.122 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:10.122 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:10.122 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:10.122 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:10.122 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:10.122 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:10.122 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:10.122 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:10.122 06:05:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.122 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.122 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.122 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.122 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.122 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:10.123 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:10.123 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.123 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:10.382 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.382 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.382 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:10.382 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.382 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.382 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:10.382 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.382 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.382 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:10.382 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.382 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.382 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:10.382 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.382 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.382 06:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:10.382 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:10.382 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:10.382 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:10.382 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:10.382 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:10.382 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:10.382 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:10.382 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
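The interleaved add_ns/remove_ns entries above are driven by the hotplug loop in target/ns_hotplug_stress.sh; the @16/@17/@18 markers in the xtrace appear to correspond to the loop counter, the namespace attach calls, and the namespace detach calls. A minimal sketch of that loop, reconstructed only from the commands visible in this log (the variable names rpc_py/subsys, the sequential for-loop shape, and the 1..8 ordering are assumptions; the shuffled namespace order in the output suggests the real script issues the RPCs concurrently or in randomized order, which is not shown here):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1
for (( i = 0; i < 10; ++i )); do            # ns_hotplug_stress.sh@16: ten hotplug rounds
    for n in {1..8}; do                     # @17: attach null0..null7 as namespaces 1..8
        "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$subsys" "null$((n - 1))"
    done
    for n in {1..8}; do                     # @18: detach the same namespaces again
        "$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$n"
    done
done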
00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:10.641 rmmod nvme_tcp 00:37:10.641 rmmod nvme_fabrics 00:37:10.641 rmmod nvme_keyring 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 3588414 ']' 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 3588414 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3588414 ']' 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3588414 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:37:10.641 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3588414 00:37:10.901 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:10.901 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:10.901 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3588414' 00:37:10.901 killing process with pid 3588414 00:37:10.901 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3588414 00:37:10.901 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3588414 00:37:10.901 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:10.901 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:10.901 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:10.901 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:37:10.901 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:37:10.901 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:37:10.901 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:10.901 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:10.901 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:10.901 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:10.901 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:10.901 06:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:13.444 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:13.444 00:37:13.444 real 0m47.259s 00:37:13.444 user 2m56.802s 00:37:13.444 sys 0m20.340s 00:37:13.444 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:13.444 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:13.444 ************************************ 00:37:13.444 END TEST nvmf_ns_hotplug_stress 00:37:13.444 ************************************ 00:37:13.444 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:13.444 06:05:46 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:13.444 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:13.444 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:13.444 ************************************ 00:37:13.444 START TEST nvmf_delete_subsystem 00:37:13.444 ************************************ 00:37:13.444 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:13.444 * Looking for test storage... 00:37:13.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:13.444 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:13.444 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:37:13.445 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:13.445 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:13.445 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:13.445 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:13.445 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:13.445 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:13.445 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:13.445 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:13.445 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:13.445 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:13.445 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:13.445 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:13.445 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:13.445 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:37:13.445 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:37:13.445 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:13.445 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:13.445 06:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:13.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:13.445 --rc genhtml_branch_coverage=1 00:37:13.445 --rc genhtml_function_coverage=1 00:37:13.445 --rc genhtml_legend=1 00:37:13.445 --rc geninfo_all_blocks=1 00:37:13.445 --rc geninfo_unexecuted_blocks=1 00:37:13.445 00:37:13.445 ' 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:13.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:13.445 --rc genhtml_branch_coverage=1 00:37:13.445 --rc genhtml_function_coverage=1 00:37:13.445 --rc genhtml_legend=1 00:37:13.445 --rc geninfo_all_blocks=1 00:37:13.445 --rc geninfo_unexecuted_blocks=1 00:37:13.445 00:37:13.445 ' 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:13.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:13.445 --rc genhtml_branch_coverage=1 00:37:13.445 --rc genhtml_function_coverage=1 00:37:13.445 --rc genhtml_legend=1 00:37:13.445 --rc geninfo_all_blocks=1 00:37:13.445 --rc geninfo_unexecuted_blocks=1 00:37:13.445 00:37:13.445 ' 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:13.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:13.445 --rc genhtml_branch_coverage=1 00:37:13.445 --rc genhtml_function_coverage=1 00:37:13.445 --rc 
genhtml_legend=1 00:37:13.445 --rc geninfo_all_blocks=1 00:37:13.445 --rc geninfo_unexecuted_blocks=1 00:37:13.445 00:37:13.445 ' 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:13.445 06:05:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:13.445 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:13.446 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:13.446 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:13.446 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:13.446 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:13.446 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:13.446 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:13.446 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:13.446 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:13.446 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:13.446 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:13.446 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:13.446 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:13.446 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:13.446 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:13.446 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:37:13.446 06:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:18.721 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:18.721 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:37:18.721 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:18.721 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:18.721 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:18.721 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:18.721 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:18.721 06:05:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:37:18.721 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:18.721 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:37:18.721 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:37:18.721 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:18.722 06:05:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:18.722 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:18.722 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:37:18.722 Found net devices under 0000:af:00.0: cvl_0_0 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:18.722 Found net devices under 0000:af:00.1: cvl_0_1 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:18.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:18.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:37:18.722 00:37:18.722 --- 10.0.0.2 ping statistics --- 00:37:18.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:18.722 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:37:18.722 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:18.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:18.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:37:18.723 00:37:18.723 --- 10.0.0.1 ping statistics --- 00:37:18.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:18.723 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=3598156 00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 3598156 00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3598156 ']' 00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:18.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
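Stepping back to the nvmftestinit plumbing traced above: the two e810 ports are turned into a point-to-point NVMe/TCP rig by moving cvl_0_0 into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), opening TCP port 4420 in iptables, and ping-checking both directions. A minimal standalone sketch of that setup, with interface names and addresses taken from the log rather than from the script itself:

  # target port lives in its own namespace; initiator port stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic reach the target port, then sanity-check both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1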
00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:18.723 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:18.723 [2024-12-16 06:05:52.503473] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:18.723 [2024-12-16 06:05:52.504437] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:18.723 [2024-12-16 06:05:52.504475] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:18.723 [2024-12-16 06:05:52.559418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:18.982 [2024-12-16 06:05:52.599070] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:18.982 [2024-12-16 06:05:52.599108] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:18.982 [2024-12-16 06:05:52.599114] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:18.982 [2024-12-16 06:05:52.599120] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:18.982 [2024-12-16 06:05:52.599125] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:18.982 [2024-12-16 06:05:52.599215] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:18.982 [2024-12-16 06:05:52.599218] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:18.982 [2024-12-16 06:05:52.659552] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:18.982 [2024-12-16 06:05:52.659905] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:18.982 [2024-12-16 06:05:52.659923] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
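The target application itself is then started inside that namespace with interrupt mode and a two-core mask (0x3), and the script blocks until the RPC socket answers; the startup notices above confirm both reactors and the nvmf poll-group threads came up in interrupt mode. A rough stand-in for what nvmfappstart/waitforlisten do (the polling loop below is illustrative, not a copy of the helper):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  # poll the default RPC socket until the app is ready to accept commands
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done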
00:37:18.982 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:18.982 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:37:18.982 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:18.982 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:18.982 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:18.982 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:18.982 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:18.982 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.982 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:18.982 [2024-12-16 06:05:52.735984] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:18.982 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.982 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:18.982 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.982 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:18.982 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.982 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:18.982 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.983 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:18.983 [2024-12-16 06:05:52.776114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:18.983 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.983 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:18.983 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.983 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:18.983 NULL1 00:37:18.983 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.983 06:05:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:18.983 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.983 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:18.983 Delay0 00:37:18.983 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.983 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:18.983 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.983 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:18.983 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.983 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3598179 00:37:18.983 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:18.983 06:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:19.242 [2024-12-16 06:05:52.862238] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
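With the target listening, the trace above builds the data path over RPC and kicks off a competing workload: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, a null bdev wrapped in a delay bdev (Delay0) so requests stay queued long enough to race with the deletion, and a 5-second spdk_nvme_perf run in the background. The same sequence expressed with rpc.py instead of the test's rpc_cmd wrapper (illustrative invocation; RPC names and arguments are the ones traced above):

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_null_create NULL1 1000 512
  # the delay bdev adds roughly 1 s of latency per I/O (values are in microseconds)
  rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!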
00:37:21.144 06:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:21.144 06:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.144 06:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:21.144 Read completed with error (sct=0, sc=8) 00:37:21.144 Read completed with error (sct=0, sc=8) 00:37:21.144 starting I/O failed: -6 00:37:21.144 Read completed with error (sct=0, sc=8) 00:37:21.144 Read completed with error (sct=0, sc=8) 00:37:21.144 Write completed with error (sct=0, sc=8) 00:37:21.144 Read completed with error (sct=0, sc=8) 00:37:21.144 starting I/O failed: -6 00:37:21.144 Read completed with error (sct=0, sc=8) 00:37:21.144 Read completed with error (sct=0, sc=8) 00:37:21.144 Read completed with error (sct=0, sc=8) 00:37:21.144 Write completed with error (sct=0, sc=8) 00:37:21.144 starting I/O failed: -6 00:37:21.144 Write completed with error (sct=0, sc=8) 00:37:21.144 Read completed with error (sct=0, sc=8) 00:37:21.144 Write completed with error (sct=0, sc=8) 00:37:21.144 Read completed with error (sct=0, sc=8) 00:37:21.144 starting I/O failed: -6 00:37:21.144 Write completed with error (sct=0, sc=8) 00:37:21.144 Write completed with error (sct=0, sc=8) 00:37:21.144 Read completed with error (sct=0, sc=8) 00:37:21.144 Read completed with error (sct=0, sc=8) 00:37:21.144 starting I/O failed: -6 00:37:21.144 Read completed with error (sct=0, sc=8) 00:37:21.144 Read completed with error (sct=0, sc=8) 00:37:21.144 Read completed with error (sct=0, sc=8) 00:37:21.144 Read completed with error (sct=0, sc=8) 00:37:21.144 starting I/O failed: -6 00:37:21.144 Read completed with error (sct=0, sc=8) 00:37:21.144 Write completed with error (sct=0, sc=8) 00:37:21.144 Write completed with error (sct=0, sc=8) 00:37:21.144 Write completed with error (sct=0, sc=8) 00:37:21.144 starting I/O failed: -6 00:37:21.144 Write completed with error (sct=0, sc=8) 00:37:21.144 Write completed with error (sct=0, sc=8) 00:37:21.144 Read completed with error (sct=0, sc=8) 00:37:21.144 Write completed with error (sct=0, sc=8) 00:37:21.144 starting I/O failed: -6 00:37:21.144 Write completed with error (sct=0, sc=8) 00:37:21.144 Read completed with error (sct=0, sc=8) 00:37:21.144 Read completed with error (sct=0, sc=8) 00:37:21.144 Read completed with error (sct=0, sc=8) 00:37:21.145 starting I/O failed: -6 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 starting I/O failed: -6 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 [2024-12-16 06:05:54.897942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb6fc00cfe0 is same with the state(6) to be set 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 starting I/O failed: -6 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read 
completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 starting I/O failed: -6 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 starting I/O failed: -6 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 starting I/O failed: -6 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 starting I/O failed: -6 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 starting I/O failed: -6 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 starting I/O failed: -6 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, 
sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 starting I/O failed: -6 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 starting I/O failed: -6 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 starting I/O failed: -6 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 starting I/O failed: -6 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 starting I/O failed: -6 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 starting I/O failed: -6 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 starting I/O failed: -6 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 [2024-12-16 06:05:54.898535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16eded0 is same with the state(6) to be set 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with 
error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Write completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 Read completed with error (sct=0, sc=8) 00:37:21.145 [2024-12-16 06:05:54.898733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb6fc000c00 is same with the state(6) to be set 00:37:22.080 [2024-12-16 06:05:55.875478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ebb20 is same with the state(6) to be set 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Write completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Write completed with error (sct=0, sc=8) 00:37:22.080 Write completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Write completed with error (sct=0, sc=8) 00:37:22.080 Write completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Write completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Write completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, 
sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 [2024-12-16 06:05:55.899798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ecc50 is same with the state(6) to be set 00:37:22.080 Write completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Write completed with error (sct=0, sc=8) 00:37:22.080 Write completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Write completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Write completed with error (sct=0, sc=8) 00:37:22.080 Write completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Write completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Write completed with error (sct=0, sc=8) 00:37:22.080 Write completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Write completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Write completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Write completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.080 [2024-12-16 06:05:55.899957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16eca70 is same with the state(6) to be set 00:37:22.080 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Write completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Write completed with error (sct=0, sc=8) 00:37:22.081 Write completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Write completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 
Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Write completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Write completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Write completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Write completed with error (sct=0, sc=8) 00:37:22.081 Write completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Write completed with error (sct=0, sc=8) 00:37:22.081 Write completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 [2024-12-16 06:05:55.900120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ee0b0 is same with the state(6) to be set 00:37:22.081 Write completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Write completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Write completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Write completed with error (sct=0, sc=8) 00:37:22.081 Write completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Write completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 Read completed with error (sct=0, sc=8) 00:37:22.081 [2024-12-16 06:05:55.901067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb6fc00d310 is same with the state(6) to be set 00:37:22.081 Initializing NVMe Controllers 00:37:22.081 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:22.081 Controller IO queue size 128, less than required. 00:37:22.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:22.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:22.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:22.081 Initialization complete. Launching workers. 
00:37:22.081 ======================================================== 00:37:22.081 Latency(us) 00:37:22.081 Device Information : IOPS MiB/s Average min max 00:37:22.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.62 0.10 943148.23 2115.62 1010750.74 00:37:22.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.89 0.08 866888.98 432.43 1010342.09 00:37:22.081 ======================================================== 00:37:22.081 Total : 353.50 0.17 909088.62 432.43 1010750.74 00:37:22.081 00:37:22.081 [2024-12-16 06:05:55.901490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ebb20 (9): Bad file descriptor 00:37:22.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:37:22.081 06:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.081 06:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:37:22.081 06:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3598179 00:37:22.081 06:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:37:22.648 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:37:22.648 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3598179 00:37:22.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3598179) - No such process 00:37:22.648 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3598179 00:37:22.648 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:37:22.648 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3598179 00:37:22.648 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:37:22.648 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3598179 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:22.649 [2024-12-16 06:05:56.432013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3598845 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3598845 00:37:22.649 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:22.649 [2024-12-16 06:05:56.491696] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
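The trace above is the second half of delete_subsystem.sh: after the aborted first perf run, the subsystem nqn.2016-06.io.spdk:cnode1 is re-created with a queue-depth cap (-m 10), a TCP listener on 10.0.0.2:4420 and the Delay0 namespace are attached, spdk_nvme_perf is launched in the background, and the script then probes that PID with kill -0 in a bounded half-second loop. A minimal sketch of the same pattern, assuming the test suite's rpc_cmd wrapper and a checkout-relative path to spdk_nvme_perf; the loop bound and layout are illustrative, not the exact script:

#!/usr/bin/env bash
# Re-create the subsystem and attach a listener plus the Delay0 namespace.
NQN=nqn.2016-06.io.spdk:cnode1
rpc_cmd nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_ns "$NQN" Delay0

# Start the load generator in the background and remember its PID.
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# kill -0 sends no signal; it only tests whether the PID still exists.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && break   # give up after roughly ten seconds of polling
    sleep 0.5
done

The half-second cadence is why the log shows one kill -0 probe every ~0.5s until the perf process exits and the script falls through to the "No such process" branch.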
00:37:23.216 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:23.216 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3598845 00:37:23.216 06:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:23.783 06:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:23.783 06:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3598845 00:37:23.783 06:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:24.350 06:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:24.350 06:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3598845 00:37:24.350 06:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:24.917 06:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:24.917 06:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3598845 00:37:24.917 06:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:25.176 06:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:25.176 06:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3598845 00:37:25.176 06:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:25.742 06:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:25.742 06:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3598845 00:37:25.742 06:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:26.001 Initializing NVMe Controllers 00:37:26.001 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:26.001 Controller IO queue size 128, less than required. 00:37:26.001 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:26.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:26.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:26.001 Initialization complete. Launching workers. 
00:37:26.001 ======================================================== 00:37:26.001 Latency(us) 00:37:26.001 Device Information : IOPS MiB/s Average min max 00:37:26.001 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002902.56 1000149.72 1009476.14 00:37:26.001 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005413.24 1000363.46 1040953.05 00:37:26.001 ======================================================== 00:37:26.001 Total : 256.00 0.12 1004157.90 1000149.72 1040953.05 00:37:26.001 00:37:26.260 06:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:26.260 06:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3598845 00:37:26.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3598845) - No such process 00:37:26.260 06:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3598845 00:37:26.260 06:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:37:26.260 06:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:37:26.260 06:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:26.260 06:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:37:26.260 06:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:26.260 06:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:37:26.260 06:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:26.260 06:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:26.260 rmmod nvme_tcp 00:37:26.260 rmmod nvme_fabrics 00:37:26.260 rmmod nvme_keyring 00:37:26.260 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:26.260 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:37:26.260 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:37:26.260 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 3598156 ']' 00:37:26.260 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 3598156 00:37:26.260 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3598156 ']' 00:37:26.260 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3598156 00:37:26.260 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:37:26.260 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:26.260 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3598156 00:37:26.260 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:26.260 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:26.260 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3598156' 00:37:26.260 killing process with pid 3598156 00:37:26.260 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3598156 00:37:26.260 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3598156 00:37:26.519 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:26.519 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:26.519 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:26.519 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:26.519 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:26.519 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:37:26.519 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:37:26.519 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:26.519 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:26.519 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:26.519 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:26.519 06:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.054 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:29.054 00:37:29.054 real 0m15.518s 00:37:29.054 user 0m25.926s 00:37:29.054 sys 0m5.523s 00:37:29.054 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:29.054 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:29.054 ************************************ 00:37:29.054 END TEST nvmf_delete_subsystem 00:37:29.054 ************************************ 00:37:29.054 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:29.054 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:29.054 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:37:29.054 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:29.054 ************************************ 00:37:29.054 START TEST nvmf_host_management 00:37:29.054 ************************************ 00:37:29.054 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:29.054 * Looking for test storage... 00:37:29.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:29.054 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:29.054 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:29.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.055 --rc genhtml_branch_coverage=1 00:37:29.055 --rc genhtml_function_coverage=1 00:37:29.055 --rc genhtml_legend=1 00:37:29.055 --rc geninfo_all_blocks=1 00:37:29.055 --rc geninfo_unexecuted_blocks=1 00:37:29.055 00:37:29.055 ' 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:29.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.055 --rc genhtml_branch_coverage=1 00:37:29.055 --rc genhtml_function_coverage=1 00:37:29.055 --rc genhtml_legend=1 00:37:29.055 --rc geninfo_all_blocks=1 00:37:29.055 --rc geninfo_unexecuted_blocks=1 00:37:29.055 00:37:29.055 ' 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:29.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.055 --rc genhtml_branch_coverage=1 00:37:29.055 --rc genhtml_function_coverage=1 00:37:29.055 --rc genhtml_legend=1 00:37:29.055 --rc geninfo_all_blocks=1 00:37:29.055 --rc geninfo_unexecuted_blocks=1 00:37:29.055 00:37:29.055 ' 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:29.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.055 --rc genhtml_branch_coverage=1 00:37:29.055 --rc genhtml_function_coverage=1 00:37:29.055 --rc genhtml_legend=1 
00:37:29.055 --rc geninfo_all_blocks=1 00:37:29.055 --rc geninfo_unexecuted_blocks=1 00:37:29.055 00:37:29.055 ' 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:29.055 06:06:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:29.055 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:29.056 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:29.056 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:29.056 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:29.056 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:29.056 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:29.056 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:29.056 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:29.056 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:29.056 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:29.056 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:29.056 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:29.056 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:29.056 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:29.056 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.056 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:29.056 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:29.056 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:37:29.056 06:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:34.327 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:34.327 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:37:34.327 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:34.327 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:34.327 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:34.327 06:06:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:34.327 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:34.327 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:37:34.327 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@354 -- 
# pci_devs=("${e810[@]}") 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:34.328 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:34.328 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:34.328 
06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:34.328 Found net devices under 0000:af:00.0: cvl_0_0 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:34.328 Found net devices under 0000:af:00.1: cvl_0_1 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:34.328 06:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:34.328 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:34.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:34.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:37:34.328 00:37:34.328 --- 10.0.0.2 ping statistics --- 00:37:34.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:34.328 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:37:34.328 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:34.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:34.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:37:34.328 00:37:34.328 --- 10.0.0.1 ping statistics --- 00:37:34.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:34.328 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=3603275 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 3603275 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3603275 ']' 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:34.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:34.329 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:34.329 [2024-12-16 06:06:08.107669] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:34.329 [2024-12-16 06:06:08.108558] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:34.329 [2024-12-16 06:06:08.108591] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:34.329 [2024-12-16 06:06:08.168740] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:34.588 [2024-12-16 06:06:08.209945] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:34.588 [2024-12-16 06:06:08.209983] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:34.588 [2024-12-16 06:06:08.209990] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:34.588 [2024-12-16 06:06:08.209996] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:34.588 [2024-12-16 06:06:08.210001] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:34.588 [2024-12-16 06:06:08.210108] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:34.588 [2024-12-16 06:06:08.210201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:37:34.588 [2024-12-16 06:06:08.210308] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:34.588 [2024-12-16 06:06:08.210310] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:37:34.589 [2024-12-16 06:06:08.282309] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:34.589 [2024-12-16 06:06:08.282530] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:34.589 [2024-12-16 06:06:08.283009] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:34.589 [2024-12-16 06:06:08.283030] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:34.589 [2024-12-16 06:06:08.283332] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
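Everything from the PCI scan down to the reactor messages is nvmftestinit and nvmfappstart at work: one port of the e810 NIC (cvl_0_0) is moved into a private network namespace to act as the target side, the other port (cvl_0_1) keeps the initiator address, an iptables rule opens TCP port 4420, connectivity is ping-checked in both directions, and nvmf_tgt is then started inside the namespace in interrupt mode on cores 1-4 (-m 0x1E). A rough sketch of that sequence, using the interface names and flags seen in the log; the socket-wait loop at the end is only an illustrative stand-in for the suite's waitforlisten helper:

# Isolate the target port in its own namespace and address both ends.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity-check the path the NVMe/TCP traffic will take.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Start the target inside the namespace: interrupt mode, cores 1-4 (0x1E).
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break    # crude stand-in for waitforlisten
    sleep 0.1
done

The --interrupt-mode flag is what produces the spdk_interrupt_mode_enable notice and the "Set spdk_thread (...) to intr mode" messages: reactors wait on events instead of busy-polling, which is the behaviour this interrupt-mode test matrix exercises.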
00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:34.589 [2024-12-16 06:06:08.346942] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:34.589 Malloc0 00:37:34.589 [2024-12-16 06:06:08.414994] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:34.589 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:34.848 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3603322 00:37:34.848 06:06:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3603322 /var/tmp/bdevperf.sock 00:37:34.848 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3603322 ']' 00:37:34.848 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:34.848 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:34.848 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:34.848 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:34.848 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:37:34.848 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:34.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:34.848 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:37:34.848 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:34.848 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:34.848 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:34.848 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:34.848 { 00:37:34.848 "params": { 00:37:34.848 "name": "Nvme$subsystem", 00:37:34.848 "trtype": "$TEST_TRANSPORT", 00:37:34.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:34.848 "adrfam": "ipv4", 00:37:34.848 "trsvcid": "$NVMF_PORT", 00:37:34.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:34.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:34.848 "hdgst": ${hdgst:-false}, 00:37:34.848 "ddgst": ${ddgst:-false} 00:37:34.848 }, 00:37:34.848 "method": "bdev_nvme_attach_controller" 00:37:34.848 } 00:37:34.848 EOF 00:37:34.848 )") 00:37:34.848 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:37:34.848 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 
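On the host side the bdevperf configuration is generated inline rather than read from a file: gen_nvmf_target_json expands one bdev_nvme_attach_controller stanza per subsystem from a heredoc, joins the fragments with IFS=',', pretty-prints the result with jq, and bdevperf consumes it through --json /dev/fd/63, i.e. a process substitution. A simplified sketch of that step follows; the surrounding "subsystems"/"bdev"/"config" envelope is the standard SPDK JSON-config layout and is an assumption here (the log only shows the attach stanza itself, and the real helper adds further entries that are omitted):

# Emit a minimal JSON config that attaches one NVMe-oF controller over TCP.
gen_bdevperf_config() {
    local n=$1
    cat <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme${n}",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode${n}",
            "hostnqn": "nqn.2016-06.io.spdk:host${n}",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# bdevperf never touches a config file on disk: <(...) appears as /dev/fd/63.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_bdevperf_config 0 | jq .) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!

Once bdevperf is up, the waitforio helper repeatedly calls bdev_get_iostat over /var/tmp/bdevperf.sock, extracts num_read_ops with jq, and declares the device live once at least 100 reads have completed, which is the read_io_count=79 then read_io_count=667 progression visible below.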
00:37:34.848 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:37:34.848 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:37:34.848 "params": { 00:37:34.848 "name": "Nvme0", 00:37:34.848 "trtype": "tcp", 00:37:34.848 "traddr": "10.0.0.2", 00:37:34.848 "adrfam": "ipv4", 00:37:34.848 "trsvcid": "4420", 00:37:34.848 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:34.848 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:34.848 "hdgst": false, 00:37:34.848 "ddgst": false 00:37:34.848 }, 00:37:34.848 "method": "bdev_nvme_attach_controller" 00:37:34.848 }' 00:37:34.849 [2024-12-16 06:06:08.509031] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:34.849 [2024-12-16 06:06:08.509077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603322 ] 00:37:34.849 [2024-12-16 06:06:08.565825] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:34.849 [2024-12-16 06:06:08.604757] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:35.108 Running I/O for 10 seconds... 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=79 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 79 -ge 100 ']' 00:37:35.108 06:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:37:35.369 06:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:37:35.369 06:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:35.369 06:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:35.369 06:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:35.369 06:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.369 06:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:35.369 06:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.369 06:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=667 00:37:35.369 06:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 667 -ge 100 ']' 00:37:35.369 06:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:35.369 06:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:35.369 06:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:35.369 06:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:35.369 06:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.369 06:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:35.369 [2024-12-16 06:06:09.190746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2537c60 is same with the state(6) to be set 00:37:35.369 [2024-12-16 06:06:09.190785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2537c60 is same with the state(6) to be set 00:37:35.369 [2024-12-16 06:06:09.190793] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2537c60 is same with the state(6) to be set 00:37:35.369 [2024-12-16 06:06:09.190799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2537c60 is same with the state(6) to be set 00:37:35.369 [2024-12-16 06:06:09.190805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2537c60 is same with the state(6) to be set 00:37:35.369 [2024-12-16 06:06:09.190812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2537c60 is same with the state(6) to be set 00:37:35.369 [2024-12-16 06:06:09.190818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2537c60 is same with the state(6) to be set 00:37:35.369 [2024-12-16 06:06:09.190824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2537c60 is same with the state(6) to be set 00:37:35.369 [2024-12-16 06:06:09.190830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2537c60 is same with the state(6) to be set 00:37:35.369 [2024-12-16 06:06:09.190841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2537c60 is same with the state(6) to be set 00:37:35.369 [2024-12-16 06:06:09.190851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2537c60 is same with the state(6) to be set 00:37:35.369 [2024-12-16 06:06:09.190857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2537c60 is same with the state(6) to be set 00:37:35.369 [2024-12-16 06:06:09.190863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2537c60 is same with the state(6) to be set 00:37:35.369 [2024-12-16 06:06:09.190870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2537c60 is same with the state(6) to be set 00:37:35.369 [2024-12-16 06:06:09.190876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2537c60 is same with the state(6) to be set 00:37:35.369 [2024-12-16 06:06:09.190882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2537c60 is same with the state(6) to be set 00:37:35.369 [2024-12-16 06:06:09.190888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2537c60 is same with the state(6) to be set 00:37:35.369 06:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.369 06:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:35.369 06:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.369 06:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:35.369 [2024-12-16 06:06:09.197534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.369 [2024-12-16 06:06:09.197569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.369 [2024-12-16 06:06:09.197585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:37:35.369 [2024-12-16 06:06:09.197593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.369 [2024-12-16 06:06:09.197602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 
[2024-12-16 06:06:09.197743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 
[2024-12-16 06:06:09.197898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.197989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.197996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.198003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.198009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.198018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.198025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.198033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 
[2024-12-16 06:06:09.198040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.198050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.198056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.198064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.198071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.198079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.198085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.198095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.198101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.198109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.198115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.198123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.198129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.198137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.198144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.198152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.198158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.198166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 [2024-12-16 06:06:09.198172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.198180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.370 
[2024-12-16 06:06:09.198186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.370 [2024-12-16 06:06:09.198195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 [2024-12-16 06:06:09.198201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 [2024-12-16 06:06:09.198215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 [2024-12-16 06:06:09.198231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 [2024-12-16 06:06:09.198246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 [2024-12-16 06:06:09.198261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 [2024-12-16 06:06:09.198275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 [2024-12-16 06:06:09.198289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 [2024-12-16 06:06:09.198304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 [2024-12-16 06:06:09.198319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 
[2024-12-16 06:06:09.198334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 [2024-12-16 06:06:09.198349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 [2024-12-16 06:06:09.198363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 [2024-12-16 06:06:09.198378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 [2024-12-16 06:06:09.198392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 [2024-12-16 06:06:09.198406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 [2024-12-16 06:06:09.198422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 [2024-12-16 06:06:09.198436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 [2024-12-16 06:06:09.198450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 [2024-12-16 06:06:09.198465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 
[2024-12-16 06:06:09.198479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 [2024-12-16 06:06:09.198493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.371 [2024-12-16 06:06:09.198507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198571] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cf0d30 was disconnected and freed. reset controller. 00:37:35.371 [2024-12-16 06:06:09.198613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:35.371 [2024-12-16 06:06:09.198622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:35.371 [2024-12-16 06:06:09.198635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:35.371 [2024-12-16 06:06:09.198650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:35.371 [2024-12-16 06:06:09.198668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.371 [2024-12-16 06:06:09.198674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad7b50 is same with the state(6) to be set 00:37:35.371 [2024-12-16 06:06:09.199534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:35.371 task offset: 98304 on job bdev=Nvme0n1 fails 00:37:35.371 00:37:35.371 Latency(us) 00:37:35.371 [2024-12-16T05:06:09.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:35.371 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:35.371 Job: Nvme0n1 ended in about 0.41 seconds with error 00:37:35.371 Verification LBA range: start 0x0 length 0x400 00:37:35.371 Nvme0n1 : 0.41 1896.12 118.51 158.01 0.00 30338.69 1700.82 27088.21 00:37:35.371 [2024-12-16T05:06:09.227Z] =================================================================================================================== 00:37:35.371 [2024-12-16T05:06:09.227Z] Total : 1896.12 118.51 158.01 0.00 30338.69 1700.82 27088.21 00:37:35.371 [2024-12-16 06:06:09.201856] 
app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:35.371 [2024-12-16 06:06:09.201877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad7b50 (9): Bad file descriptor 00:37:35.371 06:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.371 06:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:35.371 [2024-12-16 06:06:09.205043] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:37:36.749 06:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3603322 00:37:36.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3603322) - No such process 00:37:36.749 06:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:36.749 06:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:36.749 06:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:36.750 06:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:36.750 06:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:37:36.750 06:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:37:36.750 06:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:36.750 06:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:36.750 { 00:37:36.750 "params": { 00:37:36.750 "name": "Nvme$subsystem", 00:37:36.750 "trtype": "$TEST_TRANSPORT", 00:37:36.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:36.750 "adrfam": "ipv4", 00:37:36.750 "trsvcid": "$NVMF_PORT", 00:37:36.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:36.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:36.750 "hdgst": ${hdgst:-false}, 00:37:36.750 "ddgst": ${ddgst:-false} 00:37:36.750 }, 00:37:36.750 "method": "bdev_nvme_attach_controller" 00:37:36.750 } 00:37:36.750 EOF 00:37:36.750 )") 00:37:36.750 06:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:37:36.750 06:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 
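Before the second, clean 1-second run below, note how the first run above was gated: the remove_host/add_host fault is only injected once bdevperf has completed enough reads, and that gate is host_management.sh's waitforio helper. A reconstruction of that loop from the commands traced earlier in this log (rpc_cmd is the harness's wrapper around scripts/rpc.py; treat this as an illustrative sketch rather than the verbatim function):

    # Sketch of waitforio <rpc_sock> <bdev>, as traced above for the first run
    waitforio() {
        local sock=$1 bdev=$2 ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do
            # Poll bdevperf's iostat over its RPC socket and pull the read count
            read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            # 79 reads on the first poll, 667 on the second in this log
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }
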
00:37:36.750 06:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:37:36.750 06:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:37:36.750 "params": { 00:37:36.750 "name": "Nvme0", 00:37:36.750 "trtype": "tcp", 00:37:36.750 "traddr": "10.0.0.2", 00:37:36.750 "adrfam": "ipv4", 00:37:36.750 "trsvcid": "4420", 00:37:36.750 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:36.750 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:36.750 "hdgst": false, 00:37:36.750 "ddgst": false 00:37:36.750 }, 00:37:36.750 "method": "bdev_nvme_attach_controller" 00:37:36.750 }' 00:37:36.750 [2024-12-16 06:06:10.259084] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:36.750 [2024-12-16 06:06:10.259129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603616 ] 00:37:36.750 [2024-12-16 06:06:10.313999] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:36.750 [2024-12-16 06:06:10.352819] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:37.072 Running I/O for 1 seconds... 00:37:38.007 2002.00 IOPS, 125.12 MiB/s 00:37:38.007 Latency(us) 00:37:38.007 [2024-12-16T05:06:11.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:38.007 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:38.007 Verification LBA range: start 0x0 length 0x400 00:37:38.007 Nvme0n1 : 1.01 2041.96 127.62 0.00 0.00 30748.17 2137.72 26838.55 00:37:38.007 [2024-12-16T05:06:11.863Z] =================================================================================================================== 00:37:38.007 [2024-12-16T05:06:11.863Z] Total : 2041.96 127.62 0.00 0.00 30748.17 2137.72 26838.55 00:37:38.007 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:38.007 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:38.007 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:38.007 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:38.266 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:37:38.266 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:38.267 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:37:38.267 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:38.267 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:37:38.267 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:38.267 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:38.267 rmmod nvme_tcp 00:37:38.267 rmmod nvme_fabrics 00:37:38.267 rmmod nvme_keyring 00:37:38.267 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:38.267 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:38.267 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:38.267 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 3603275 ']' 00:37:38.267 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 3603275 00:37:38.267 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3603275 ']' 00:37:38.267 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3603275 00:37:38.267 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:37:38.267 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:38.267 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3603275 00:37:38.267 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:38.267 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:38.267 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3603275' 00:37:38.267 killing process with pid 3603275 00:37:38.267 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3603275 00:37:38.267 06:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3603275 00:37:38.526 [2024-12-16 06:06:12.170062] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:38.526 06:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:38.526 06:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:38.526 06:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:38.526 06:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:38.526 06:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:37:38.526 06:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:38.526 06:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:37:38.526 06:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:38.526 06:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:37:38.526 06:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:38.526 06:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:38.526 06:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:40.430 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:40.430 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:40.430 00:37:40.430 real 0m11.839s 00:37:40.430 user 0m17.859s 00:37:40.430 sys 0m5.936s 00:37:40.430 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:40.430 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:40.430 ************************************ 00:37:40.430 END TEST nvmf_host_management 00:37:40.430 ************************************ 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:40.690 ************************************ 00:37:40.690 START TEST nvmf_lvol 00:37:40.690 ************************************ 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:40.690 * Looking for test storage... 
00:37:40.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:40.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.690 --rc genhtml_branch_coverage=1 00:37:40.690 --rc genhtml_function_coverage=1 00:37:40.690 --rc genhtml_legend=1 00:37:40.690 --rc geninfo_all_blocks=1 00:37:40.690 --rc geninfo_unexecuted_blocks=1 00:37:40.690 00:37:40.690 ' 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:40.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.690 --rc genhtml_branch_coverage=1 00:37:40.690 --rc genhtml_function_coverage=1 00:37:40.690 --rc genhtml_legend=1 00:37:40.690 --rc geninfo_all_blocks=1 00:37:40.690 --rc geninfo_unexecuted_blocks=1 00:37:40.690 00:37:40.690 ' 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:40.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.690 --rc genhtml_branch_coverage=1 00:37:40.690 --rc genhtml_function_coverage=1 00:37:40.690 --rc genhtml_legend=1 00:37:40.690 --rc geninfo_all_blocks=1 00:37:40.690 --rc geninfo_unexecuted_blocks=1 00:37:40.690 00:37:40.690 ' 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:40.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.690 --rc genhtml_branch_coverage=1 00:37:40.690 --rc genhtml_function_coverage=1 00:37:40.690 --rc genhtml_legend=1 00:37:40.690 --rc geninfo_all_blocks=1 00:37:40.690 --rc geninfo_unexecuted_blocks=1 00:37:40.690 00:37:40.690 ' 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:40.690 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:40.950 06:06:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:40.950 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:40.951 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:40.951 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:40.951 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:40.951 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:40.951 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:40.951 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:40.951 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:40.951 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:40.951 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:40.951 06:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:46.221 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:46.221 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:46.221 Found net devices under 0000:af:00.0: cvl_0_0 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:46.221 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
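
[editor's note] The trace here (gather_supported_nvmf_pci_devs) resolves each detected E810 port to its kernel net device through sysfs. A minimal sketch of that lookup, assuming the two PCI addresses from this run (0000:af:00.0 and 0000:af:00.1) and that both ports are already bound to the ice driver:

    # Sketch only: map a PCI address to its net device name via sysfs,
    # mirroring what nvmf/common.sh does for each entry in pci_devs.
    for pci in 0000:af:00.0 0000:af:00.1; do            # addresses assumed from this run
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        [[ -e ${pci_net_devs[0]} ]] || continue             # skip ports with no netdev bound
        echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
    done
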
00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:46.222 Found net devices under 0000:af:00.1: cvl_0_1 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:46.222 06:06:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:46.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:46.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:37:46.222 00:37:46.222 --- 10.0.0.2 ping statistics --- 00:37:46.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:46.222 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:46.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:46.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:37:46.222 00:37:46.222 --- 10.0.0.1 ping statistics --- 00:37:46.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:46.222 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:46.222 06:06:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=3607254 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 3607254 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3607254 ']' 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:46.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:46.222 [2024-12-16 06:06:19.775242] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:46.222 [2024-12-16 06:06:19.776202] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:46.222 [2024-12-16 06:06:19.776239] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:46.222 [2024-12-16 06:06:19.837190] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:46.222 [2024-12-16 06:06:19.877641] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:46.222 [2024-12-16 06:06:19.877679] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:46.222 [2024-12-16 06:06:19.877686] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:46.222 [2024-12-16 06:06:19.877692] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:46.222 [2024-12-16 06:06:19.877697] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:46.222 [2024-12-16 06:06:19.877746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:46.222 [2024-12-16 06:06:19.877853] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:46.222 [2024-12-16 06:06:19.877865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:46.222 [2024-12-16 06:06:19.947637] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:46.222 [2024-12-16 06:06:19.947789] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
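
[editor's note] At this point the script has moved cvl_0_0 into the cvl_0_0_ns_spdk namespace and launched nvmf_tgt inside it in interrupt mode, then waits for the RPC socket before issuing any rpc.py calls. A condensed, illustrative sketch of that launch-and-wait step, with flags and paths taken from this run; the polling loop is an assumption — the real waitforlisten helper in autotest_common.sh does more bookkeeping:

    # Sketch: start the target inside the namespace, then poll its RPC socket
    # until it answers (the loop is illustrative, not the actual helper).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; break; }
        sleep 0.5
    done
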
00:37:46.222 [2024-12-16 06:06:19.948025] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:46.222 [2024-12-16 06:06:19.948208] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:46.222 06:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:46.222 06:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:46.222 06:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:46.481 [2024-12-16 06:06:20.186297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:46.481 06:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:46.740 06:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:46.740 06:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:46.998 06:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:46.998 06:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:46.998 06:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:47.257 06:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c94920cc-0d1f-4ea1-aed2-ec14e7b7e0c7 00:37:47.257 06:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c94920cc-0d1f-4ea1-aed2-ec14e7b7e0c7 lvol 20 00:37:47.515 06:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a99acdda-720c-4fe2-a69a-ba489287d8c5 00:37:47.515 06:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:47.515 06:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a99acdda-720c-4fe2-a69a-ba489287d8c5 00:37:47.774 06:06:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:48.033 [2024-12-16 06:06:21.726398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:48.033 06:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:48.291 06:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3607724 00:37:48.291 06:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:48.291 06:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:49.227 06:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a99acdda-720c-4fe2-a69a-ba489287d8c5 MY_SNAPSHOT 00:37:49.485 06:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3067e507-2004-4b45-a9c3-46c37427c4b4 00:37:49.485 06:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a99acdda-720c-4fe2-a69a-ba489287d8c5 30 00:37:49.744 06:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3067e507-2004-4b45-a9c3-46c37427c4b4 MY_CLONE 00:37:50.002 06:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a27f1096-f56a-4e62-84ff-eb3ce0fb088e 00:37:50.002 06:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a27f1096-f56a-4e62-84ff-eb3ce0fb088e 00:37:50.569 06:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3607724 00:37:58.686 Initializing NVMe Controllers 00:37:58.686 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:58.686 Controller IO queue size 128, less than required. 00:37:58.686 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:58.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:58.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:58.686 Initialization complete. Launching workers. 
00:37:58.686 ======================================================== 00:37:58.686 Latency(us) 00:37:58.687 Device Information : IOPS MiB/s Average min max 00:37:58.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12252.20 47.86 10452.33 1458.42 78347.29 00:37:58.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12140.30 47.42 10546.91 3001.80 43196.01 00:37:58.687 ======================================================== 00:37:58.687 Total : 24392.50 95.28 10499.41 1458.42 78347.29 00:37:58.687 00:37:58.687 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:58.687 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a99acdda-720c-4fe2-a69a-ba489287d8c5 00:37:59.000 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c94920cc-0d1f-4ea1-aed2-ec14e7b7e0c7 00:37:59.317 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:59.317 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:59.317 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:37:59.317 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:59.317 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:59.317 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:59.317 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:59.317 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:59.317 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:59.317 rmmod nvme_tcp 00:37:59.317 rmmod nvme_fabrics 00:37:59.317 rmmod nvme_keyring 00:37:59.317 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:59.317 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:59.317 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:59.317 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 3607254 ']' 00:37:59.317 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 3607254 00:37:59.317 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3607254 ']' 00:37:59.317 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3607254 00:37:59.317 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:37:59.317 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:59.317 06:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3607254 00:37:59.317 06:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:59.317 06:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:59.317 06:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3607254' 00:37:59.317 killing process with pid 3607254 00:37:59.317 06:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3607254 00:37:59.317 06:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3607254 00:37:59.576 06:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:59.576 06:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:59.576 06:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:59.576 06:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:59.576 06:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:37:59.576 06:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:59.576 06:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:37:59.576 06:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:59.576 06:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:59.576 06:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:59.576 06:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:59.576 06:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:01.480 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:01.480 00:38:01.480 real 0m20.979s 00:38:01.480 user 0m55.098s 00:38:01.480 sys 0m9.316s 00:38:01.480 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:01.480 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:01.480 ************************************ 00:38:01.480 END TEST nvmf_lvol 00:38:01.480 ************************************ 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:01.739 ************************************ 00:38:01.739 START TEST nvmf_lvs_grow 00:38:01.739 
************************************ 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:01.739 * Looking for test storage... 00:38:01.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:01.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.739 --rc genhtml_branch_coverage=1 00:38:01.739 --rc genhtml_function_coverage=1 00:38:01.739 --rc genhtml_legend=1 00:38:01.739 --rc geninfo_all_blocks=1 00:38:01.739 --rc geninfo_unexecuted_blocks=1 00:38:01.739 00:38:01.739 ' 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:01.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.739 --rc genhtml_branch_coverage=1 00:38:01.739 --rc genhtml_function_coverage=1 00:38:01.739 --rc genhtml_legend=1 00:38:01.739 --rc geninfo_all_blocks=1 00:38:01.739 --rc geninfo_unexecuted_blocks=1 00:38:01.739 00:38:01.739 ' 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:01.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.739 --rc genhtml_branch_coverage=1 00:38:01.739 --rc genhtml_function_coverage=1 00:38:01.739 --rc genhtml_legend=1 00:38:01.739 --rc geninfo_all_blocks=1 00:38:01.739 --rc geninfo_unexecuted_blocks=1 00:38:01.739 00:38:01.739 ' 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:01.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.739 --rc genhtml_branch_coverage=1 00:38:01.739 --rc genhtml_function_coverage=1 00:38:01.739 --rc genhtml_legend=1 00:38:01.739 --rc geninfo_all_blocks=1 00:38:01.739 --rc geninfo_unexecuted_blocks=1 00:38:01.739 00:38:01.739 ' 00:38:01.739 06:06:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.739 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.740 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.740 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:38:01.740 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.740 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:38:01.740 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:01.740 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:01.740 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:01.740 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:01.740 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:38:01.740 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:01.740 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:01.740 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:01.740 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:01.740 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:01.999 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:01.999 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:01.999 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:38:01.999 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:01.999 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:01.999 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:01.999 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:01.999 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:01.999 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:01.999 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:01.999 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:01.999 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:01.999 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:01.999 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:38:01.999 06:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:07.271 06:06:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 
00:38:07.271 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:07.271 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:07.271 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:07.272 Found net devices under 0000:af:00.0: cvl_0_0 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:07.272 Found net devices under 0000:af:00.1: cvl_0_1 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:07.272 06:06:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:07.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:07.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:38:07.272 00:38:07.272 --- 10.0.0.2 ping statistics --- 00:38:07.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:07.272 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:07.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:07.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:38:07.272 00:38:07.272 --- 10.0.0.1 ping statistics --- 00:38:07.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:07.272 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=3612751 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 3612751 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3612751 ']' 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:07.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:07.272 06:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:07.272 [2024-12-16 06:06:40.923949] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
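The commands above are the harness wiring the two e810 ports into a point-to-point NVMe/TCP topology: the target-side port (cvl_0_0) is moved into its own network namespace so that initiator-to-target traffic actually crosses the link between the two ports. A condensed sketch of that pattern, not the actual nvmf/common.sh implementation, using the interface names, addresses, and port seen above:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root netns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) in
ping -c 1 10.0.0.2                                 # root netns -> target
ip netns exec "$NS" ping -c 1 10.0.0.1             # target netns -> initiator

Every target-side process started later in the run, including nvmf_tgt itself, is wrapped in ip netns exec cvl_0_0_ns_spdk for the same reason.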
00:38:07.272 [2024-12-16 06:06:40.924843] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:07.272 [2024-12-16 06:06:40.924886] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:07.272 [2024-12-16 06:06:40.983665] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.272 [2024-12-16 06:06:41.022459] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:07.273 [2024-12-16 06:06:41.022497] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:07.273 [2024-12-16 06:06:41.022504] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:07.273 [2024-12-16 06:06:41.022511] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:07.273 [2024-12-16 06:06:41.022516] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:07.273 [2024-12-16 06:06:41.022536] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:07.273 [2024-12-16 06:06:41.083576] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:07.273 [2024-12-16 06:06:41.083789] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:07.273 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:07.273 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:38:07.273 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:07.273 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:07.273 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:07.531 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:07.531 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:07.531 [2024-12-16 06:06:41.311255] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:07.531 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:38:07.531 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:07.531 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:07.531 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:07.531 ************************************ 00:38:07.531 START TEST lvs_grow_clean 00:38:07.531 ************************************ 00:38:07.531 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:38:07.531 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:07.531 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:07.531 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:07.531 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:07.531 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:07.531 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:07.532 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:07.532 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:07.532 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:07.790 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:07.790 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:08.048 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=56a59a19-2b53-410e-a139-243d59fc183f 00:38:08.048 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56a59a19-2b53-410e-a139-243d59fc183f 00:38:08.048 06:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:08.307 06:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:08.307 06:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:08.307 06:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 56a59a19-2b53-410e-a139-243d59fc183f lvol 150 00:38:08.568 06:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=01bcd263-229b-4247-b8be-897d7db31bb3 00:38:08.568 06:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:08.568 06:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:08.568 [2024-12-16 06:06:42.371021] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:08.568 [2024-12-16 06:06:42.371124] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:08.568 true 00:38:08.568 06:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:08.568 06:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56a59a19-2b53-410e-a139-243d59fc183f 00:38:08.826 06:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:08.826 06:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:09.084 06:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 01bcd263-229b-4247-b8be-897d7db31bb3 00:38:09.084 06:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:09.342 [2024-12-16 06:06:43.079435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:09.342 06:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:09.601 06:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3613234 00:38:09.601 06:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:09.601 06:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:09.601 06:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3613234 /var/tmp/bdevperf.sock 00:38:09.601 06:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3613234 ']' 00:38:09.601 06:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:38:09.601 06:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:09.601 06:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:09.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:09.601 06:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:09.601 06:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:09.601 [2024-12-16 06:06:43.339660] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:09.601 [2024-12-16 06:06:43.339709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3613234 ] 00:38:09.601 [2024-12-16 06:06:43.394716] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:09.601 [2024-12-16 06:06:43.434528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:09.860 06:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:09.860 06:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:38:09.860 06:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:10.119 Nvme0n1 00:38:10.119 06:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:10.377 [ 00:38:10.377 { 00:38:10.377 "name": "Nvme0n1", 00:38:10.377 "aliases": [ 00:38:10.377 "01bcd263-229b-4247-b8be-897d7db31bb3" 00:38:10.377 ], 00:38:10.377 "product_name": "NVMe disk", 00:38:10.377 "block_size": 4096, 00:38:10.377 "num_blocks": 38912, 00:38:10.377 "uuid": "01bcd263-229b-4247-b8be-897d7db31bb3", 00:38:10.377 "numa_id": 1, 00:38:10.377 "assigned_rate_limits": { 00:38:10.377 "rw_ios_per_sec": 0, 00:38:10.377 "rw_mbytes_per_sec": 0, 00:38:10.377 "r_mbytes_per_sec": 0, 00:38:10.377 "w_mbytes_per_sec": 0 00:38:10.377 }, 00:38:10.377 "claimed": false, 00:38:10.377 "zoned": false, 00:38:10.377 "supported_io_types": { 00:38:10.377 "read": true, 00:38:10.377 "write": true, 00:38:10.377 "unmap": true, 00:38:10.377 "flush": true, 00:38:10.377 "reset": true, 00:38:10.377 "nvme_admin": true, 00:38:10.377 "nvme_io": true, 00:38:10.377 "nvme_io_md": false, 00:38:10.377 "write_zeroes": true, 00:38:10.377 "zcopy": false, 00:38:10.377 "get_zone_info": false, 00:38:10.378 "zone_management": false, 00:38:10.378 "zone_append": false, 00:38:10.378 "compare": true, 00:38:10.378 "compare_and_write": true, 00:38:10.378 "abort": true, 00:38:10.378 "seek_hole": false, 00:38:10.378 "seek_data": false, 00:38:10.378 "copy": true, 
00:38:10.378 "nvme_iov_md": false 00:38:10.378 }, 00:38:10.378 "memory_domains": [ 00:38:10.378 { 00:38:10.378 "dma_device_id": "system", 00:38:10.378 "dma_device_type": 1 00:38:10.378 } 00:38:10.378 ], 00:38:10.378 "driver_specific": { 00:38:10.378 "nvme": [ 00:38:10.378 { 00:38:10.378 "trid": { 00:38:10.378 "trtype": "TCP", 00:38:10.378 "adrfam": "IPv4", 00:38:10.378 "traddr": "10.0.0.2", 00:38:10.378 "trsvcid": "4420", 00:38:10.378 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:10.378 }, 00:38:10.378 "ctrlr_data": { 00:38:10.378 "cntlid": 1, 00:38:10.378 "vendor_id": "0x8086", 00:38:10.378 "model_number": "SPDK bdev Controller", 00:38:10.378 "serial_number": "SPDK0", 00:38:10.378 "firmware_revision": "24.09.1", 00:38:10.378 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:10.378 "oacs": { 00:38:10.378 "security": 0, 00:38:10.378 "format": 0, 00:38:10.378 "firmware": 0, 00:38:10.378 "ns_manage": 0 00:38:10.378 }, 00:38:10.378 "multi_ctrlr": true, 00:38:10.378 "ana_reporting": false 00:38:10.378 }, 00:38:10.378 "vs": { 00:38:10.378 "nvme_version": "1.3" 00:38:10.378 }, 00:38:10.378 "ns_data": { 00:38:10.378 "id": 1, 00:38:10.378 "can_share": true 00:38:10.378 } 00:38:10.378 } 00:38:10.378 ], 00:38:10.378 "mp_policy": "active_passive" 00:38:10.378 } 00:38:10.378 } 00:38:10.378 ] 00:38:10.378 06:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3613245 00:38:10.378 06:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:10.378 06:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:10.378 Running I/O for 10 seconds... 
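With the lvol exported as namespace 1 of nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, the bdevperf process (launched with -z so it waits for RPC) is driven over its own socket: attach the remote namespace as a local NVMe bdev, sanity-check it, then start the queued randwrite job. A condensed restatement of the RPC sequence above, not the test script itself:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock
# The exported namespace shows up as bdev Nvme0n1 once the controller is attached.
$SPDK/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
# block_size 4096 / num_blocks 38912 in the JSON come from the 150M lvol behind it.
$SPDK/scripts/rpc.py -s "$SOCK" bdev_get_bdevs -b Nvme0n1 -t 3000
# Kick off the 10-second randwrite workload that bdevperf is holding for.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests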
00:38:11.313 Latency(us) 00:38:11.313 [2024-12-16T05:06:45.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:11.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:11.313 Nvme0n1 : 1.00 22655.00 88.50 0.00 0.00 0.00 0.00 0.00 00:38:11.314 [2024-12-16T05:06:45.170Z] =================================================================================================================== 00:38:11.314 [2024-12-16T05:06:45.170Z] Total : 22655.00 88.50 0.00 0.00 0.00 0.00 0.00 00:38:11.314 00:38:12.248 06:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 56a59a19-2b53-410e-a139-243d59fc183f 00:38:12.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:12.248 Nvme0n1 : 2.00 22847.50 89.25 0.00 0.00 0.00 0.00 0.00 00:38:12.248 [2024-12-16T05:06:46.104Z] =================================================================================================================== 00:38:12.248 [2024-12-16T05:06:46.104Z] Total : 22847.50 89.25 0.00 0.00 0.00 0.00 0.00 00:38:12.248 00:38:12.507 true 00:38:12.507 06:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56a59a19-2b53-410e-a139-243d59fc183f 00:38:12.507 06:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:12.765 06:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:12.765 06:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:12.765 06:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3613245 00:38:13.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:13.333 Nvme0n1 : 3.00 22924.67 89.55 0.00 0.00 0.00 0.00 0.00 00:38:13.333 [2024-12-16T05:06:47.189Z] =================================================================================================================== 00:38:13.333 [2024-12-16T05:06:47.189Z] Total : 22924.67 89.55 0.00 0.00 0.00 0.00 0.00 00:38:13.333 00:38:14.268 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:14.268 Nvme0n1 : 4.00 23041.50 90.01 0.00 0.00 0.00 0.00 0.00 00:38:14.268 [2024-12-16T05:06:48.124Z] =================================================================================================================== 00:38:14.268 [2024-12-16T05:06:48.124Z] Total : 23041.50 90.01 0.00 0.00 0.00 0.00 0.00 00:38:14.268 00:38:15.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:15.644 Nvme0n1 : 5.00 23104.40 90.25 0.00 0.00 0.00 0.00 0.00 00:38:15.644 [2024-12-16T05:06:49.500Z] =================================================================================================================== 00:38:15.644 [2024-12-16T05:06:49.500Z] Total : 23104.40 90.25 0.00 0.00 0.00 0.00 0.00 00:38:15.644 00:38:16.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:16.580 Nvme0n1 : 6.00 23127.33 90.34 0.00 0.00 0.00 0.00 0.00 00:38:16.580 [2024-12-16T05:06:50.436Z] 
=================================================================================================================== 00:38:16.580 [2024-12-16T05:06:50.436Z] Total : 23127.33 90.34 0.00 0.00 0.00 0.00 0.00 00:38:16.580 00:38:17.516 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:17.516 Nvme0n1 : 7.00 23125.00 90.33 0.00 0.00 0.00 0.00 0.00 00:38:17.516 [2024-12-16T05:06:51.372Z] =================================================================================================================== 00:38:17.516 [2024-12-16T05:06:51.372Z] Total : 23125.00 90.33 0.00 0.00 0.00 0.00 0.00 00:38:17.516 00:38:18.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:18.453 Nvme0n1 : 8.00 23148.75 90.42 0.00 0.00 0.00 0.00 0.00 00:38:18.453 [2024-12-16T05:06:52.309Z] =================================================================================================================== 00:38:18.453 [2024-12-16T05:06:52.309Z] Total : 23148.75 90.42 0.00 0.00 0.00 0.00 0.00 00:38:18.453 00:38:19.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:19.389 Nvme0n1 : 9.00 23192.89 90.60 0.00 0.00 0.00 0.00 0.00 00:38:19.389 [2024-12-16T05:06:53.245Z] =================================================================================================================== 00:38:19.389 [2024-12-16T05:06:53.245Z] Total : 23192.89 90.60 0.00 0.00 0.00 0.00 0.00 00:38:19.389 00:38:20.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:20.326 Nvme0n1 : 10.00 23219.00 90.70 0.00 0.00 0.00 0.00 0.00 00:38:20.326 [2024-12-16T05:06:54.182Z] =================================================================================================================== 00:38:20.326 [2024-12-16T05:06:54.182Z] Total : 23219.00 90.70 0.00 0.00 0.00 0.00 0.00 00:38:20.326 00:38:20.326 00:38:20.326 Latency(us) 00:38:20.326 [2024-12-16T05:06:54.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:20.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:20.326 Nvme0n1 : 10.01 23218.62 90.70 0.00 0.00 5509.44 3229.99 16477.62 00:38:20.326 [2024-12-16T05:06:54.182Z] =================================================================================================================== 00:38:20.326 [2024-12-16T05:06:54.182Z] Total : 23218.62 90.70 0.00 0.00 5509.44 3229.99 16477.62 00:38:20.326 { 00:38:20.326 "results": [ 00:38:20.326 { 00:38:20.326 "job": "Nvme0n1", 00:38:20.326 "core_mask": "0x2", 00:38:20.326 "workload": "randwrite", 00:38:20.326 "status": "finished", 00:38:20.326 "queue_depth": 128, 00:38:20.326 "io_size": 4096, 00:38:20.326 "runtime": 10.005678, 00:38:20.326 "iops": 23218.61646956858, 00:38:20.326 "mibps": 90.69772058425227, 00:38:20.326 "io_failed": 0, 00:38:20.326 "io_timeout": 0, 00:38:20.326 "avg_latency_us": 5509.437433665432, 00:38:20.326 "min_latency_us": 3229.9885714285715, 00:38:20.326 "max_latency_us": 16477.62285714286 00:38:20.326 } 00:38:20.326 ], 00:38:20.326 "core_count": 1 00:38:20.326 } 00:38:20.326 06:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3613234 00:38:20.326 06:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3613234 ']' 00:38:20.326 06:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3613234 
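The grow itself happens while that workload is running: the lvstore created on the 200M AIO file starts at 49 data clusters, and after bdev_lvol_grow_lvstore picks up the rescanned 400M backing file the count doubles to 99 without disturbing the in-flight I/O. A sketch of that check, reusing this run's lvstore UUID:

LVS=56a59a19-2b53-410e-a139-243d59fc183f
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Claim the space exposed by the earlier truncate -s 400M + bdev_aio_rescan...
$RPC bdev_lvol_grow_lvstore -u "$LVS"
# ...and confirm the 4M data clusters went from 49 to 99 while bdevperf keeps writing.
clusters=$($RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters')
(( clusters == 99 )) || echo "unexpected total_data_clusters: $clusters"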
00:38:20.326 06:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:38:20.326 06:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:20.326 06:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3613234 00:38:20.585 06:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:20.585 06:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:20.585 06:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3613234' 00:38:20.585 killing process with pid 3613234 00:38:20.585 06:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3613234 00:38:20.585 Received shutdown signal, test time was about 10.000000 seconds 00:38:20.585 00:38:20.585 Latency(us) 00:38:20.585 [2024-12-16T05:06:54.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:20.585 [2024-12-16T05:06:54.441Z] =================================================================================================================== 00:38:20.585 [2024-12-16T05:06:54.441Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:20.585 06:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3613234 00:38:20.585 06:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:20.844 06:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:21.103 06:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56a59a19-2b53-410e-a139-243d59fc183f 00:38:21.103 06:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:21.362 06:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:21.362 06:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:38:21.362 06:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:21.362 [2024-12-16 06:06:55.135003] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:21.362 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56a59a19-2b53-410e-a139-243d59fc183f 
00:38:21.362 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:38:21.362 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56a59a19-2b53-410e-a139-243d59fc183f 00:38:21.362 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:21.362 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:21.362 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:21.362 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:21.363 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:21.363 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:21.363 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:21.363 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:21.363 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56a59a19-2b53-410e-a139-243d59fc183f 00:38:21.621 request: 00:38:21.621 { 00:38:21.621 "uuid": "56a59a19-2b53-410e-a139-243d59fc183f", 00:38:21.621 "method": "bdev_lvol_get_lvstores", 00:38:21.621 "req_id": 1 00:38:21.621 } 00:38:21.621 Got JSON-RPC error response 00:38:21.621 response: 00:38:21.621 { 00:38:21.621 "code": -19, 00:38:21.621 "message": "No such device" 00:38:21.621 } 00:38:21.621 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:38:21.621 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:21.621 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:21.621 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:21.621 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:21.880 aio_bdev 00:38:21.880 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
01bcd263-229b-4247-b8be-897d7db31bb3 00:38:21.880 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=01bcd263-229b-4247-b8be-897d7db31bb3 00:38:21.880 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:21.880 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:38:21.880 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:21.880 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:21.880 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:22.139 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 01bcd263-229b-4247-b8be-897d7db31bb3 -t 2000 00:38:22.139 [ 00:38:22.139 { 00:38:22.139 "name": "01bcd263-229b-4247-b8be-897d7db31bb3", 00:38:22.139 "aliases": [ 00:38:22.139 "lvs/lvol" 00:38:22.139 ], 00:38:22.139 "product_name": "Logical Volume", 00:38:22.139 "block_size": 4096, 00:38:22.139 "num_blocks": 38912, 00:38:22.139 "uuid": "01bcd263-229b-4247-b8be-897d7db31bb3", 00:38:22.139 "assigned_rate_limits": { 00:38:22.139 "rw_ios_per_sec": 0, 00:38:22.139 "rw_mbytes_per_sec": 0, 00:38:22.139 "r_mbytes_per_sec": 0, 00:38:22.139 "w_mbytes_per_sec": 0 00:38:22.139 }, 00:38:22.139 "claimed": false, 00:38:22.139 "zoned": false, 00:38:22.139 "supported_io_types": { 00:38:22.139 "read": true, 00:38:22.139 "write": true, 00:38:22.139 "unmap": true, 00:38:22.139 "flush": false, 00:38:22.139 "reset": true, 00:38:22.139 "nvme_admin": false, 00:38:22.139 "nvme_io": false, 00:38:22.139 "nvme_io_md": false, 00:38:22.139 "write_zeroes": true, 00:38:22.139 "zcopy": false, 00:38:22.139 "get_zone_info": false, 00:38:22.139 "zone_management": false, 00:38:22.139 "zone_append": false, 00:38:22.139 "compare": false, 00:38:22.139 "compare_and_write": false, 00:38:22.139 "abort": false, 00:38:22.139 "seek_hole": true, 00:38:22.139 "seek_data": true, 00:38:22.139 "copy": false, 00:38:22.139 "nvme_iov_md": false 00:38:22.139 }, 00:38:22.139 "driver_specific": { 00:38:22.139 "lvol": { 00:38:22.139 "lvol_store_uuid": "56a59a19-2b53-410e-a139-243d59fc183f", 00:38:22.139 "base_bdev": "aio_bdev", 00:38:22.139 "thin_provision": false, 00:38:22.139 "num_allocated_clusters": 38, 00:38:22.139 "snapshot": false, 00:38:22.139 "clone": false, 00:38:22.139 "esnap_clone": false 00:38:22.139 } 00:38:22.139 } 00:38:22.139 } 00:38:22.139 ] 00:38:22.139 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:38:22.139 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56a59a19-2b53-410e-a139-243d59fc183f 00:38:22.139 06:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:22.398 06:06:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:22.398 06:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56a59a19-2b53-410e-a139-243d59fc183f 00:38:22.398 06:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:22.657 06:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:22.657 06:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 01bcd263-229b-4247-b8be-897d7db31bb3 00:38:22.657 06:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 56a59a19-2b53-410e-a139-243d59fc183f 00:38:22.916 06:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:23.175 06:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:23.175 00:38:23.175 real 0m15.574s 00:38:23.175 user 0m15.090s 00:38:23.175 sys 0m1.467s 00:38:23.175 06:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:23.175 06:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:23.175 ************************************ 00:38:23.175 END TEST lvs_grow_clean 00:38:23.175 ************************************ 00:38:23.175 06:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:38:23.175 06:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:23.175 06:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:23.175 06:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:23.175 ************************************ 00:38:23.175 START TEST lvs_grow_dirty 00:38:23.175 ************************************ 00:38:23.175 06:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:38:23.175 06:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:23.175 06:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:23.175 06:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:23.175 06:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:23.175 06:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:23.175 06:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:23.175 06:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:23.175 06:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:23.175 06:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:23.434 06:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:23.434 06:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:23.692 06:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=cfc42aa3-e4c5-4235-b780-36df91256b5d 00:38:23.693 06:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfc42aa3-e4c5-4235-b780-36df91256b5d 00:38:23.693 06:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:23.951 06:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:23.951 06:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:23.951 06:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cfc42aa3-e4c5-4235-b780-36df91256b5d lvol 150 00:38:24.211 06:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=16826342-e22e-46dd-b62c-21ab723150c3 00:38:24.211 06:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:24.211 06:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:24.211 [2024-12-16 06:06:57.998867] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:24.211 [2024-12-16 06:06:57.998952] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:24.211 true 00:38:24.211 06:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfc42aa3-e4c5-4235-b780-36df91256b5d 00:38:24.211 06:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:24.469 06:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:24.470 06:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:24.728 06:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 16826342-e22e-46dd-b62c-21ab723150c3 00:38:24.728 06:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:24.987 [2024-12-16 06:06:58.751370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:24.987 06:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:25.246 06:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3615719 00:38:25.246 06:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:25.246 06:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3615719 /var/tmp/bdevperf.sock 00:38:25.246 06:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:25.246 06:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3615719 ']' 00:38:25.246 06:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:25.246 06:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:25.246 06:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:25.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
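The dirty variant rebuilds the same stack from scratch: a 200M file-backed AIO bdev, an lvstore with 4M clusters (49 data clusters), a 150M lvol, then the backing file is grown to 400M and bdev_aio_rescan updates the bdev size (51200 to 102400 blocks) while total_data_clusters stays at 49 until the lvstore is explicitly grown. A condensed sketch of that setup, using this run's file path and parameters rather than the test script verbatim:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
AIO=$SPDK/test/nvmf/target/aio_bdev
rm -f "$AIO"; truncate -s 200M "$AIO"
$SPDK/scripts/rpc.py bdev_aio_create "$AIO" aio_bdev 4096        # 200M file -> aio_bdev
lvs=$($SPDK/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)           # prints the lvstore UUID
lvol=$($SPDK/scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150) # 150M lvol on top
truncate -s 400M "$AIO"                                          # grow the backing file...
$SPDK/scripts/rpc.py bdev_aio_rescan aio_bdev                    # ...and re-read its size
# Still 49 here; only a later bdev_lvol_grow_lvstore claims the new clusters.
$SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'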
00:38:25.246 06:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:25.246 06:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:25.246 [2024-12-16 06:06:58.987173] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:25.247 [2024-12-16 06:06:58.987218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3615719 ] 00:38:25.247 [2024-12-16 06:06:59.041765] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:25.247 [2024-12-16 06:06:59.082183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:25.506 06:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:25.506 06:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:38:25.506 06:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:25.765 Nvme0n1 00:38:25.765 06:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:26.024 [ 00:38:26.024 { 00:38:26.024 "name": "Nvme0n1", 00:38:26.024 "aliases": [ 00:38:26.024 "16826342-e22e-46dd-b62c-21ab723150c3" 00:38:26.024 ], 00:38:26.024 "product_name": "NVMe disk", 00:38:26.024 "block_size": 4096, 00:38:26.024 "num_blocks": 38912, 00:38:26.024 "uuid": "16826342-e22e-46dd-b62c-21ab723150c3", 00:38:26.024 "numa_id": 1, 00:38:26.024 "assigned_rate_limits": { 00:38:26.024 "rw_ios_per_sec": 0, 00:38:26.024 "rw_mbytes_per_sec": 0, 00:38:26.024 "r_mbytes_per_sec": 0, 00:38:26.024 "w_mbytes_per_sec": 0 00:38:26.024 }, 00:38:26.024 "claimed": false, 00:38:26.024 "zoned": false, 00:38:26.024 "supported_io_types": { 00:38:26.024 "read": true, 00:38:26.024 "write": true, 00:38:26.024 "unmap": true, 00:38:26.024 "flush": true, 00:38:26.024 "reset": true, 00:38:26.024 "nvme_admin": true, 00:38:26.024 "nvme_io": true, 00:38:26.024 "nvme_io_md": false, 00:38:26.024 "write_zeroes": true, 00:38:26.024 "zcopy": false, 00:38:26.024 "get_zone_info": false, 00:38:26.024 "zone_management": false, 00:38:26.024 "zone_append": false, 00:38:26.024 "compare": true, 00:38:26.024 "compare_and_write": true, 00:38:26.024 "abort": true, 00:38:26.024 "seek_hole": false, 00:38:26.024 "seek_data": false, 00:38:26.024 "copy": true, 00:38:26.024 "nvme_iov_md": false 00:38:26.024 }, 00:38:26.024 "memory_domains": [ 00:38:26.024 { 00:38:26.024 "dma_device_id": "system", 00:38:26.024 "dma_device_type": 1 00:38:26.024 } 00:38:26.024 ], 00:38:26.024 "driver_specific": { 00:38:26.024 "nvme": [ 00:38:26.024 { 00:38:26.024 "trid": { 00:38:26.024 "trtype": "TCP", 00:38:26.024 "adrfam": "IPv4", 00:38:26.024 "traddr": "10.0.0.2", 00:38:26.024 "trsvcid": "4420", 00:38:26.024 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:26.024 }, 00:38:26.024 
"ctrlr_data": { 00:38:26.024 "cntlid": 1, 00:38:26.024 "vendor_id": "0x8086", 00:38:26.024 "model_number": "SPDK bdev Controller", 00:38:26.024 "serial_number": "SPDK0", 00:38:26.024 "firmware_revision": "24.09.1", 00:38:26.024 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:26.024 "oacs": { 00:38:26.024 "security": 0, 00:38:26.024 "format": 0, 00:38:26.024 "firmware": 0, 00:38:26.024 "ns_manage": 0 00:38:26.024 }, 00:38:26.024 "multi_ctrlr": true, 00:38:26.024 "ana_reporting": false 00:38:26.024 }, 00:38:26.024 "vs": { 00:38:26.024 "nvme_version": "1.3" 00:38:26.024 }, 00:38:26.024 "ns_data": { 00:38:26.024 "id": 1, 00:38:26.024 "can_share": true 00:38:26.024 } 00:38:26.024 } 00:38:26.024 ], 00:38:26.024 "mp_policy": "active_passive" 00:38:26.024 } 00:38:26.024 } 00:38:26.024 ] 00:38:26.024 06:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3615756 00:38:26.024 06:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:26.024 06:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:26.024 Running I/O for 10 seconds... 00:38:26.960 Latency(us) 00:38:26.960 [2024-12-16T05:07:00.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:26.960 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:26.960 Nvme0n1 : 1.00 22581.00 88.21 0.00 0.00 0.00 0.00 0.00 00:38:26.960 [2024-12-16T05:07:00.816Z] =================================================================================================================== 00:38:26.960 [2024-12-16T05:07:00.816Z] Total : 22581.00 88.21 0.00 0.00 0.00 0.00 0.00 00:38:26.960 00:38:27.897 06:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cfc42aa3-e4c5-4235-b780-36df91256b5d 00:38:28.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:28.156 Nvme0n1 : 2.00 22856.00 89.28 0.00 0.00 0.00 0.00 0.00 00:38:28.156 [2024-12-16T05:07:02.012Z] =================================================================================================================== 00:38:28.156 [2024-12-16T05:07:02.012Z] Total : 22856.00 89.28 0.00 0.00 0.00 0.00 0.00 00:38:28.156 00:38:28.156 true 00:38:28.156 06:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfc42aa3-e4c5-4235-b780-36df91256b5d 00:38:28.156 06:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:28.415 06:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:28.415 06:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:28.415 06:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3615756 00:38:28.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:38:28.983 Nvme0n1 : 3.00 22995.33 89.83 0.00 0.00 0.00 0.00 0.00 00:38:28.983 [2024-12-16T05:07:02.839Z] =================================================================================================================== 00:38:28.983 [2024-12-16T05:07:02.839Z] Total : 22995.33 89.83 0.00 0.00 0.00 0.00 0.00 00:38:28.983 00:38:29.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:29.920 Nvme0n1 : 4.00 23097.25 90.22 0.00 0.00 0.00 0.00 0.00 00:38:29.920 [2024-12-16T05:07:03.776Z] =================================================================================================================== 00:38:29.920 [2024-12-16T05:07:03.777Z] Total : 23097.25 90.22 0.00 0.00 0.00 0.00 0.00 00:38:29.921 00:38:31.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:31.299 Nvme0n1 : 5.00 23179.20 90.54 0.00 0.00 0.00 0.00 0.00 00:38:31.299 [2024-12-16T05:07:05.155Z] =================================================================================================================== 00:38:31.299 [2024-12-16T05:07:05.155Z] Total : 23179.20 90.54 0.00 0.00 0.00 0.00 0.00 00:38:31.299 00:38:32.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:32.235 Nvme0n1 : 6.00 23246.33 90.81 0.00 0.00 0.00 0.00 0.00 00:38:32.235 [2024-12-16T05:07:06.091Z] =================================================================================================================== 00:38:32.235 [2024-12-16T05:07:06.091Z] Total : 23246.33 90.81 0.00 0.00 0.00 0.00 0.00 00:38:32.235 00:38:33.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:33.172 Nvme0n1 : 7.00 23287.00 90.96 0.00 0.00 0.00 0.00 0.00 00:38:33.172 [2024-12-16T05:07:07.028Z] =================================================================================================================== 00:38:33.172 [2024-12-16T05:07:07.028Z] Total : 23287.00 90.96 0.00 0.00 0.00 0.00 0.00 00:38:33.172 00:38:34.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:34.109 Nvme0n1 : 8.00 23303.50 91.03 0.00 0.00 0.00 0.00 0.00 00:38:34.109 [2024-12-16T05:07:07.965Z] =================================================================================================================== 00:38:34.109 [2024-12-16T05:07:07.965Z] Total : 23303.50 91.03 0.00 0.00 0.00 0.00 0.00 00:38:34.109 00:38:35.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:35.046 Nvme0n1 : 9.00 23330.44 91.13 0.00 0.00 0.00 0.00 0.00 00:38:35.046 [2024-12-16T05:07:08.902Z] =================================================================================================================== 00:38:35.046 [2024-12-16T05:07:08.902Z] Total : 23330.44 91.13 0.00 0.00 0.00 0.00 0.00 00:38:35.046 00:38:35.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:35.982 Nvme0n1 : 10.00 23338.70 91.17 0.00 0.00 0.00 0.00 0.00 00:38:35.982 [2024-12-16T05:07:09.838Z] =================================================================================================================== 00:38:35.982 [2024-12-16T05:07:09.838Z] Total : 23338.70 91.17 0.00 0.00 0.00 0.00 0.00 00:38:35.982 00:38:35.982 00:38:35.982 Latency(us) 00:38:35.982 [2024-12-16T05:07:09.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:35.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:35.982 Nvme0n1 : 10.00 23344.62 91.19 0.00 0.00 5480.26 3105.16 16477.62 
00:38:35.982 [2024-12-16T05:07:09.838Z] =================================================================================================================== 00:38:35.982 [2024-12-16T05:07:09.838Z] Total : 23344.62 91.19 0.00 0.00 5480.26 3105.16 16477.62 00:38:35.982 { 00:38:35.982 "results": [ 00:38:35.982 { 00:38:35.982 "job": "Nvme0n1", 00:38:35.982 "core_mask": "0x2", 00:38:35.982 "workload": "randwrite", 00:38:35.982 "status": "finished", 00:38:35.982 "queue_depth": 128, 00:38:35.982 "io_size": 4096, 00:38:35.982 "runtime": 10.002948, 00:38:35.982 "iops": 23344.61800661165, 00:38:35.982 "mibps": 91.18991408832676, 00:38:35.982 "io_failed": 0, 00:38:35.982 "io_timeout": 0, 00:38:35.982 "avg_latency_us": 5480.264383024238, 00:38:35.982 "min_latency_us": 3105.158095238095, 00:38:35.982 "max_latency_us": 16477.62285714286 00:38:35.982 } 00:38:35.982 ], 00:38:35.982 "core_count": 1 00:38:35.982 } 00:38:35.982 06:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3615719 00:38:35.982 06:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3615719 ']' 00:38:35.982 06:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3615719 00:38:35.982 06:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:38:35.982 06:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:35.982 06:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3615719 00:38:36.241 06:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:36.241 06:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:36.241 06:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3615719' 00:38:36.241 killing process with pid 3615719 00:38:36.241 06:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3615719 00:38:36.241 Received shutdown signal, test time was about 10.000000 seconds 00:38:36.241 00:38:36.241 Latency(us) 00:38:36.241 [2024-12-16T05:07:10.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:36.241 [2024-12-16T05:07:10.097Z] =================================================================================================================== 00:38:36.241 [2024-12-16T05:07:10.097Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:36.241 06:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3615719 00:38:36.241 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:36.499 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:38:36.757 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfc42aa3-e4c5-4235-b780-36df91256b5d 00:38:36.758 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:37.017 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:37.017 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:37.017 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3612751 00:38:37.017 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3612751 00:38:37.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3612751 Killed "${NVMF_APP[@]}" "$@" 00:38:37.017 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:37.017 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:37.017 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:37.017 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:37.017 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:37.017 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=3617538 00:38:37.017 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 3617538 00:38:37.017 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:37.017 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3617538 ']' 00:38:37.017 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:37.017 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:37.017 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:37.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
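The free-cluster check and dirty restart above reduce to a short command sequence. A minimal sketch, assuming the SPDK repo root as the working directory and reusing the lvstore UUID and target PID from this run (both differ between runs):

  # Confirm the grown lvstore reports 61 free clusters before the crash is simulated.
  free_clusters=$(scripts/rpc.py bdev_lvol_get_lvstores -u cfc42aa3-e4c5-4235-b780-36df91256b5d \
      | jq -r '.[0].free_clusters')
  (( free_clusters == 61 ))

  # Simulate a dirty shutdown: SIGKILL the target so the lvstore metadata is never flushed,
  # then restart nvmf_tgt on one core in interrupt mode and wait for its RPC socket to appear.
  kill -9 3612751
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done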
00:38:37.017 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:37.017 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:37.017 [2024-12-16 06:07:10.745979] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:37.017 [2024-12-16 06:07:10.746933] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:37.017 [2024-12-16 06:07:10.746968] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:37.017 [2024-12-16 06:07:10.807160] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:37.017 [2024-12-16 06:07:10.846408] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:37.017 [2024-12-16 06:07:10.846445] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:37.017 [2024-12-16 06:07:10.846452] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:37.017 [2024-12-16 06:07:10.846458] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:37.017 [2024-12-16 06:07:10.846466] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:37.017 [2024-12-16 06:07:10.846503] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:37.276 [2024-12-16 06:07:10.907865] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:37.276 [2024-12-16 06:07:10.908097] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
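The RPC block that follows is the heart of the dirty-grow check: the same backing file is reattached as an AIO bdev, the blobstore on it is replayed (the "Performing recovery on blobstore" notices), and the grown lvol reappears without any explicit import step. Sketched with the same UUID and a repo-relative path (an assumption; the run itself uses absolute paths):

  # Re-create the AIO bdev on the file that backed the lvstore before the SIGKILL.
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  # Let examine callbacks finish, then verify the recovered lvol is visible again.
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py bdev_get_bdevs -b 16826342-e22e-46dd-b62c-21ab723150c3 -t 2000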
00:38:37.276 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:37.276 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:38:37.276 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:37.276 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:37.276 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:37.276 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:37.276 06:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:37.535 [2024-12-16 06:07:11.137468] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:37.535 [2024-12-16 06:07:11.137577] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:37.535 [2024-12-16 06:07:11.137618] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:37.535 06:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:37.535 06:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 16826342-e22e-46dd-b62c-21ab723150c3 00:38:37.535 06:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=16826342-e22e-46dd-b62c-21ab723150c3 00:38:37.535 06:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:37.535 06:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:38:37.535 06:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:37.535 06:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:37.535 06:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:37.535 06:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 16826342-e22e-46dd-b62c-21ab723150c3 -t 2000 00:38:37.793 [ 00:38:37.793 { 00:38:37.794 "name": "16826342-e22e-46dd-b62c-21ab723150c3", 00:38:37.794 "aliases": [ 00:38:37.794 "lvs/lvol" 00:38:37.794 ], 00:38:37.794 "product_name": "Logical Volume", 00:38:37.794 "block_size": 4096, 00:38:37.794 "num_blocks": 38912, 00:38:37.794 "uuid": "16826342-e22e-46dd-b62c-21ab723150c3", 00:38:37.794 "assigned_rate_limits": { 00:38:37.794 "rw_ios_per_sec": 0, 00:38:37.794 "rw_mbytes_per_sec": 0, 00:38:37.794 
"r_mbytes_per_sec": 0, 00:38:37.794 "w_mbytes_per_sec": 0 00:38:37.794 }, 00:38:37.794 "claimed": false, 00:38:37.794 "zoned": false, 00:38:37.794 "supported_io_types": { 00:38:37.794 "read": true, 00:38:37.794 "write": true, 00:38:37.794 "unmap": true, 00:38:37.794 "flush": false, 00:38:37.794 "reset": true, 00:38:37.794 "nvme_admin": false, 00:38:37.794 "nvme_io": false, 00:38:37.794 "nvme_io_md": false, 00:38:37.794 "write_zeroes": true, 00:38:37.794 "zcopy": false, 00:38:37.794 "get_zone_info": false, 00:38:37.794 "zone_management": false, 00:38:37.794 "zone_append": false, 00:38:37.794 "compare": false, 00:38:37.794 "compare_and_write": false, 00:38:37.794 "abort": false, 00:38:37.794 "seek_hole": true, 00:38:37.794 "seek_data": true, 00:38:37.794 "copy": false, 00:38:37.794 "nvme_iov_md": false 00:38:37.794 }, 00:38:37.794 "driver_specific": { 00:38:37.794 "lvol": { 00:38:37.794 "lvol_store_uuid": "cfc42aa3-e4c5-4235-b780-36df91256b5d", 00:38:37.794 "base_bdev": "aio_bdev", 00:38:37.794 "thin_provision": false, 00:38:37.794 "num_allocated_clusters": 38, 00:38:37.794 "snapshot": false, 00:38:37.794 "clone": false, 00:38:37.794 "esnap_clone": false 00:38:37.794 } 00:38:37.794 } 00:38:37.794 } 00:38:37.794 ] 00:38:37.794 06:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:38:37.794 06:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfc42aa3-e4c5-4235-b780-36df91256b5d 00:38:37.794 06:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:38.052 06:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:38.052 06:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfc42aa3-e4c5-4235-b780-36df91256b5d 00:38:38.052 06:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:38.311 06:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:38.311 06:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:38.311 [2024-12-16 06:07:12.082967] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:38.311 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfc42aa3-e4c5-4235-b780-36df91256b5d 00:38:38.311 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:38:38.311 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfc42aa3-e4c5-4235-b780-36df91256b5d 00:38:38.311 06:07:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:38.311 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:38.311 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:38.312 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:38.312 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:38.312 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:38.312 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:38.312 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:38.312 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfc42aa3-e4c5-4235-b780-36df91256b5d 00:38:38.571 request: 00:38:38.571 { 00:38:38.571 "uuid": "cfc42aa3-e4c5-4235-b780-36df91256b5d", 00:38:38.571 "method": "bdev_lvol_get_lvstores", 00:38:38.571 "req_id": 1 00:38:38.571 } 00:38:38.571 Got JSON-RPC error response 00:38:38.571 response: 00:38:38.571 { 00:38:38.571 "code": -19, 00:38:38.571 "message": "No such device" 00:38:38.571 } 00:38:38.571 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:38:38.571 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:38.571 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:38.571 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:38.571 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:38.829 aio_bdev 00:38:38.829 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 16826342-e22e-46dd-b62c-21ab723150c3 00:38:38.829 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=16826342-e22e-46dd-b62c-21ab723150c3 00:38:38.830 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:38.830 06:07:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:38:38.830 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:38.830 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:38.830 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:38.830 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 16826342-e22e-46dd-b62c-21ab723150c3 -t 2000 00:38:39.088 [ 00:38:39.088 { 00:38:39.088 "name": "16826342-e22e-46dd-b62c-21ab723150c3", 00:38:39.089 "aliases": [ 00:38:39.089 "lvs/lvol" 00:38:39.089 ], 00:38:39.089 "product_name": "Logical Volume", 00:38:39.089 "block_size": 4096, 00:38:39.089 "num_blocks": 38912, 00:38:39.089 "uuid": "16826342-e22e-46dd-b62c-21ab723150c3", 00:38:39.089 "assigned_rate_limits": { 00:38:39.089 "rw_ios_per_sec": 0, 00:38:39.089 "rw_mbytes_per_sec": 0, 00:38:39.089 "r_mbytes_per_sec": 0, 00:38:39.089 "w_mbytes_per_sec": 0 00:38:39.089 }, 00:38:39.089 "claimed": false, 00:38:39.089 "zoned": false, 00:38:39.089 "supported_io_types": { 00:38:39.089 "read": true, 00:38:39.089 "write": true, 00:38:39.089 "unmap": true, 00:38:39.089 "flush": false, 00:38:39.089 "reset": true, 00:38:39.089 "nvme_admin": false, 00:38:39.089 "nvme_io": false, 00:38:39.089 "nvme_io_md": false, 00:38:39.089 "write_zeroes": true, 00:38:39.089 "zcopy": false, 00:38:39.089 "get_zone_info": false, 00:38:39.089 "zone_management": false, 00:38:39.089 "zone_append": false, 00:38:39.089 "compare": false, 00:38:39.089 "compare_and_write": false, 00:38:39.089 "abort": false, 00:38:39.089 "seek_hole": true, 00:38:39.089 "seek_data": true, 00:38:39.089 "copy": false, 00:38:39.089 "nvme_iov_md": false 00:38:39.089 }, 00:38:39.089 "driver_specific": { 00:38:39.089 "lvol": { 00:38:39.089 "lvol_store_uuid": "cfc42aa3-e4c5-4235-b780-36df91256b5d", 00:38:39.089 "base_bdev": "aio_bdev", 00:38:39.089 "thin_provision": false, 00:38:39.089 "num_allocated_clusters": 38, 00:38:39.089 "snapshot": false, 00:38:39.089 "clone": false, 00:38:39.089 "esnap_clone": false 00:38:39.089 } 00:38:39.089 } 00:38:39.089 } 00:38:39.089 ] 00:38:39.089 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:38:39.089 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:39.089 06:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfc42aa3-e4c5-4235-b780-36df91256b5d 00:38:39.347 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:39.347 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfc42aa3-e4c5-4235-b780-36df91256b5d 00:38:39.347 06:07:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:39.606 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:39.606 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 16826342-e22e-46dd-b62c-21ab723150c3 00:38:39.606 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cfc42aa3-e4c5-4235-b780-36df91256b5d 00:38:39.865 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:40.123 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:40.123 00:38:40.123 real 0m16.826s 00:38:40.123 user 0m34.018s 00:38:40.123 sys 0m3.973s 00:38:40.123 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:40.123 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:40.123 ************************************ 00:38:40.123 END TEST lvs_grow_dirty 00:38:40.123 ************************************ 00:38:40.123 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:40.123 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:38:40.123 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:38:40.123 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:38:40.123 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:40.123 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:38:40.123 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:38:40.124 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:38:40.124 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:40.124 nvmf_trace.0 00:38:40.124 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:38:40.124 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:40.124 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:40.124 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
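Teardown for lvs_grow is traced above and below: the lvol, lvstore and AIO bdev are deleted, the trace buffer left in /dev/shm is archived, and the kernel initiator modules are unloaded before the target process is killed. Condensed (output path shortened, PID taken from this run):

  scripts/rpc.py bdev_lvol_delete 16826342-e22e-46dd-b62c-21ab723150c3
  scripts/rpc.py bdev_lvol_delete_lvstore -u cfc42aa3-e4c5-4235-b780-36df91256b5d
  scripts/rpc.py bdev_aio_delete aio_bdev
  rm -f test/nvmf/target/aio_bdev
  # Archive the shared-memory trace file, then unwind the initiator side and stop the target.
  tar -C /dev/shm/ -cvzf ../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 3617538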
00:38:40.124 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:40.124 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:40.124 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:40.124 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:40.124 rmmod nvme_tcp 00:38:40.124 rmmod nvme_fabrics 00:38:40.124 rmmod nvme_keyring 00:38:40.383 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:40.383 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:40.383 06:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:40.383 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 3617538 ']' 00:38:40.383 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 3617538 00:38:40.383 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3617538 ']' 00:38:40.383 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3617538 00:38:40.383 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:38:40.383 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:40.383 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3617538 00:38:40.383 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:40.383 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:40.383 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3617538' 00:38:40.383 killing process with pid 3617538 00:38:40.383 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3617538 00:38:40.383 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3617538 00:38:40.383 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:40.383 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:40.383 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:40.383 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:40.641 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:38:40.641 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:38:40.641 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:40.641 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:40.641 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:40.641 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:40.641 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:40.641 06:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:42.547 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:42.547 00:38:42.547 real 0m40.923s 00:38:42.547 user 0m51.303s 00:38:42.547 sys 0m9.865s 00:38:42.547 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:42.547 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:42.547 ************************************ 00:38:42.547 END TEST nvmf_lvs_grow 00:38:42.547 ************************************ 00:38:42.547 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:42.547 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:42.547 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:42.547 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:42.547 ************************************ 00:38:42.547 START TEST nvmf_bdev_io_wait 00:38:42.547 ************************************ 00:38:42.547 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:42.807 * Looking for test storage... 
00:38:42.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:42.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:42.807 --rc genhtml_branch_coverage=1 00:38:42.807 --rc genhtml_function_coverage=1 00:38:42.807 --rc genhtml_legend=1 00:38:42.807 --rc geninfo_all_blocks=1 00:38:42.807 --rc geninfo_unexecuted_blocks=1 00:38:42.807 00:38:42.807 ' 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:42.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:42.807 --rc genhtml_branch_coverage=1 00:38:42.807 --rc genhtml_function_coverage=1 00:38:42.807 --rc genhtml_legend=1 00:38:42.807 --rc geninfo_all_blocks=1 00:38:42.807 --rc geninfo_unexecuted_blocks=1 00:38:42.807 00:38:42.807 ' 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:42.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:42.807 --rc genhtml_branch_coverage=1 00:38:42.807 --rc genhtml_function_coverage=1 00:38:42.807 --rc genhtml_legend=1 00:38:42.807 --rc geninfo_all_blocks=1 00:38:42.807 --rc geninfo_unexecuted_blocks=1 00:38:42.807 00:38:42.807 ' 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:42.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:42.807 --rc genhtml_branch_coverage=1 00:38:42.807 --rc genhtml_function_coverage=1 00:38:42.807 --rc genhtml_legend=1 00:38:42.807 --rc geninfo_all_blocks=1 00:38:42.807 --rc 
geninfo_unexecuted_blocks=1 00:38:42.807 00:38:42.807 ' 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.807 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:42.808 06:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:48.212 06:07:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:48.212 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:48.212 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:48.212 Found net devices under 0000:af:00.0: cvl_0_0 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:48.212 06:07:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:48.212 Found net devices under 0000:af:00.1: cvl_0_1 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:48.212 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:48.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:48.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:38:48.213 00:38:48.213 --- 10.0.0.2 ping statistics --- 00:38:48.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:48.213 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:48.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:48.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:38:48.213 00:38:48.213 --- 10.0.0.1 ping statistics --- 00:38:48.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:48.213 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=3621522 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 3621522 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3621522 ']' 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:48.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:48.213 06:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.213 [2024-12-16 06:07:22.018460] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:48.213 [2024-12-16 06:07:22.019308] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:48.213 [2024-12-16 06:07:22.019338] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:48.473 [2024-12-16 06:07:22.077267] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:48.473 [2024-12-16 06:07:22.118947] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:48.473 [2024-12-16 06:07:22.118986] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:48.473 [2024-12-16 06:07:22.118993] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:48.473 [2024-12-16 06:07:22.119000] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:48.473 [2024-12-16 06:07:22.119005] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:48.473 [2024-12-16 06:07:22.119128] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:48.473 [2024-12-16 06:07:22.119241] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:38:48.473 [2024-12-16 06:07:22.119327] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:38:48.473 [2024-12-16 06:07:22.119328] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:48.473 [2024-12-16 06:07:22.119618] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
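A minimal sketch, assuming a stand-alone run outside the test harness, of the target-side setup traced above (interface names, addresses, and the nvmf_tgt flags are taken from the trace; SPDK_DIR is a placeholder):

#!/usr/bin/env bash
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # placeholder for the SPDK checkout
TGT_IF=cvl_0_0                        # target-side port, moved into the namespace
INI_IF=cvl_0_1                        # initiator-side port, left in the default namespace
NS=cvl_0_0_ns_spdk

# Flush stale addresses, create the namespace, and move the target port into it.
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

# Assign the 10.0.0.0/24 test addresses and bring the links up.
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Accept NVMe/TCP traffic to port 4420 on the initiator side and verify reachability both ways.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Launch the target inside the namespace in interrupt mode; it waits for RPC configuration.
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &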
00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.473 [2024-12-16 06:07:22.253951] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:48.473 [2024-12-16 06:07:22.254123] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:48.473 [2024-12-16 06:07:22.254510] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:48.473 [2024-12-16 06:07:22.254930] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
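The rpc_cmd calls traced here and just below configure the waiting target over /var/tmp/spdk.sock: a deliberately small bdev I/O pool, framework_start_init, the TCP transport, a 64 MiB Malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420. A sketch of the same sequence, assuming scripts/rpc.py is used instead of the harness's rpc_cmd helper (method names and arguments are the ones in the trace):

RPC="$SPDK_DIR/scripts/rpc.py"   # assumption: direct rpc.py calls against the default /var/tmp/spdk.sock

$RPC bdev_set_options -p 5 -c 1                  # small bdev I/O pool and cache (bdev_io_wait.sh@18)
$RPC framework_start_init                        # finish startup of the --wait-for-rpc target
$RPC nvmf_create_transport -t tcp -o -u 8192     # TCP transport, options as traced
$RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420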
00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.473 [2024-12-16 06:07:22.260001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.473 Malloc0 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.473 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.473 [2024-12-16 06:07:22.323924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:48.732 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.732 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3621550 00:38:48.732 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:48.732 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:48.732 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3621552 00:38:48.732 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:38:48.732 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:38:48.732 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:48.732 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:48.732 { 00:38:48.732 "params": { 00:38:48.732 "name": "Nvme$subsystem", 00:38:48.732 "trtype": "$TEST_TRANSPORT", 00:38:48.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:48.732 "adrfam": "ipv4", 00:38:48.732 "trsvcid": "$NVMF_PORT", 00:38:48.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:48.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:48.732 "hdgst": ${hdgst:-false}, 00:38:48.732 "ddgst": ${ddgst:-false} 00:38:48.732 }, 00:38:48.732 "method": "bdev_nvme_attach_controller" 00:38:48.732 } 00:38:48.732 EOF 00:38:48.732 )") 00:38:48.732 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:48.732 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:48.732 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3621554 00:38:48.732 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:38:48.732 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:38:48.732 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:48.732 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:48.732 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:48.732 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:48.732 { 00:38:48.732 "params": { 00:38:48.732 "name": "Nvme$subsystem", 00:38:48.732 "trtype": "$TEST_TRANSPORT", 00:38:48.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:48.732 "adrfam": "ipv4", 00:38:48.733 "trsvcid": "$NVMF_PORT", 00:38:48.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:48.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:48.733 "hdgst": ${hdgst:-false}, 00:38:48.733 "ddgst": ${ddgst:-false} 00:38:48.733 }, 00:38:48.733 "method": "bdev_nvme_attach_controller" 00:38:48.733 } 00:38:48.733 EOF 00:38:48.733 )") 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=3621557 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:48.733 { 00:38:48.733 "params": { 00:38:48.733 "name": "Nvme$subsystem", 00:38:48.733 "trtype": "$TEST_TRANSPORT", 00:38:48.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:48.733 "adrfam": "ipv4", 00:38:48.733 "trsvcid": "$NVMF_PORT", 00:38:48.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:48.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:48.733 "hdgst": ${hdgst:-false}, 00:38:48.733 "ddgst": ${ddgst:-false} 00:38:48.733 }, 00:38:48.733 "method": "bdev_nvme_attach_controller" 00:38:48.733 } 00:38:48.733 EOF 00:38:48.733 )") 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:48.733 { 00:38:48.733 "params": { 00:38:48.733 "name": "Nvme$subsystem", 00:38:48.733 "trtype": "$TEST_TRANSPORT", 00:38:48.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:48.733 "adrfam": "ipv4", 00:38:48.733 "trsvcid": "$NVMF_PORT", 00:38:48.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:48.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:48.733 "hdgst": ${hdgst:-false}, 00:38:48.733 "ddgst": ${ddgst:-false} 00:38:48.733 }, 00:38:48.733 "method": "bdev_nvme_attach_controller" 00:38:48.733 } 00:38:48.733 EOF 00:38:48.733 )") 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3621550 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:48.733 "params": { 00:38:48.733 "name": "Nvme1", 00:38:48.733 "trtype": "tcp", 00:38:48.733 "traddr": "10.0.0.2", 00:38:48.733 "adrfam": "ipv4", 00:38:48.733 "trsvcid": "4420", 00:38:48.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:48.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:48.733 "hdgst": false, 00:38:48.733 "ddgst": false 00:38:48.733 }, 00:38:48.733 "method": "bdev_nvme_attach_controller" 00:38:48.733 }' 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:48.733 "params": { 00:38:48.733 "name": "Nvme1", 00:38:48.733 "trtype": "tcp", 00:38:48.733 "traddr": "10.0.0.2", 00:38:48.733 "adrfam": "ipv4", 00:38:48.733 "trsvcid": "4420", 00:38:48.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:48.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:48.733 "hdgst": false, 00:38:48.733 "ddgst": false 00:38:48.733 }, 00:38:48.733 "method": "bdev_nvme_attach_controller" 00:38:48.733 }' 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:48.733 "params": { 00:38:48.733 "name": "Nvme1", 00:38:48.733 "trtype": "tcp", 00:38:48.733 "traddr": "10.0.0.2", 00:38:48.733 "adrfam": "ipv4", 00:38:48.733 "trsvcid": "4420", 00:38:48.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:48.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:48.733 "hdgst": false, 00:38:48.733 "ddgst": false 00:38:48.733 }, 00:38:48.733 "method": "bdev_nvme_attach_controller" 00:38:48.733 }' 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:38:48.733 06:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:48.733 "params": { 00:38:48.733 "name": "Nvme1", 00:38:48.733 "trtype": "tcp", 00:38:48.733 "traddr": "10.0.0.2", 00:38:48.733 "adrfam": "ipv4", 00:38:48.733 "trsvcid": "4420", 00:38:48.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:48.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:48.733 "hdgst": false, 00:38:48.733 "ddgst": false 00:38:48.733 }, 00:38:48.733 "method": "bdev_nvme_attach_controller" 00:38:48.733 }' 00:38:48.733 [2024-12-16 06:07:22.374250] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:48.733 [2024-12-16 06:07:22.374298] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:48.733 [2024-12-16 06:07:22.376148] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:38:48.733 [2024-12-16 06:07:22.376196] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:48.733 [2024-12-16 06:07:22.376222] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:48.733 [2024-12-16 06:07:22.376261] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:48.733 [2024-12-16 06:07:22.379689] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:48.733 [2024-12-16 06:07:22.379730] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:48.733 [2024-12-16 06:07:22.551608] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.733 [2024-12-16 06:07:22.581705] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:38:48.992 [2024-12-16 06:07:22.645767] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.992 [2024-12-16 06:07:22.678114] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:38:48.992 [2024-12-16 06:07:22.701019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.992 [2024-12-16 06:07:22.728655] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:38:48.992 [2024-12-16 06:07:22.741758] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.992 [2024-12-16 06:07:22.768511] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:38:49.251 Running I/O for 1 seconds... 00:38:49.509 Running I/O for 1 seconds... 00:38:49.509 Running I/O for 1 seconds... 00:38:49.509 Running I/O for 1 seconds... 
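At this point four bdevperf instances are running for one second each against the same subsystem (write, read, flush, and unmap workloads on core masks 0x10, 0x20, 0x40, and 0x80), each fed a generated attach-controller config on /dev/fd/63. A sketch of how the write job could be reproduced with a config file; the outer "subsystems"/"bdev" wrapper is an assumption about the standard SPDK JSON-config layout, since only the bdev_nvme_attach_controller entry appears verbatim in the trace, where it is passed via process substitution rather than a temp file:

CFG=$(mktemp)
cat > "$CFG" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# -q 128 queue depth, -o 4096-byte IOs, -t 1 second, -w selects the workload; the
# read/flush/unmap jobs in the trace differ only in -w, -m, and -i.
"$SPDK_DIR/build/examples/bdevperf" -m 0x10 -i 1 --json "$CFG" -q 128 -o 4096 -w write -t 1 -s 256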
00:38:50.444 10054.00 IOPS, 39.27 MiB/s 00:38:50.444 Latency(us) 00:38:50.444 [2024-12-16T05:07:24.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:50.444 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:50.444 Nvme1n1 : 1.02 10011.51 39.11 0.00 0.00 12695.00 3994.58 22094.99 00:38:50.444 [2024-12-16T05:07:24.300Z] =================================================================================================================== 00:38:50.444 [2024-12-16T05:07:24.300Z] Total : 10011.51 39.11 0.00 0.00 12695.00 3994.58 22094.99 00:38:50.444 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3621552 00:38:50.444 254272.00 IOPS, 993.25 MiB/s 00:38:50.444 Latency(us) 00:38:50.444 [2024-12-16T05:07:24.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:50.444 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:50.444 Nvme1n1 : 1.00 253884.96 991.74 0.00 0.00 501.32 229.18 1497.97 00:38:50.444 [2024-12-16T05:07:24.300Z] =================================================================================================================== 00:38:50.444 [2024-12-16T05:07:24.300Z] Total : 253884.96 991.74 0.00 0.00 501.32 229.18 1497.97 00:38:50.444 10769.00 IOPS, 42.07 MiB/s 00:38:50.444 Latency(us) 00:38:50.444 [2024-12-16T05:07:24.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:50.444 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:50.444 Nvme1n1 : 1.01 10817.85 42.26 0.00 0.00 11783.12 4556.31 16227.96 00:38:50.444 [2024-12-16T05:07:24.300Z] =================================================================================================================== 00:38:50.444 [2024-12-16T05:07:24.300Z] Total : 10817.85 42.26 0.00 0.00 11783.12 4556.31 16227.96 00:38:50.444 8971.00 IOPS, 35.04 MiB/s 00:38:50.444 Latency(us) 00:38:50.444 [2024-12-16T05:07:24.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:50.444 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:50.444 Nvme1n1 : 1.01 9102.44 35.56 0.00 0.00 14034.89 2855.50 31207.62 00:38:50.444 [2024-12-16T05:07:24.300Z] =================================================================================================================== 00:38:50.444 [2024-12-16T05:07:24.300Z] Total : 9102.44 35.56 0.00 0.00 14034.89 2855.50 31207.62 00:38:50.703 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3621554 00:38:50.703 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3621557 00:38:50.703 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:50.703 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:50.703 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:50.703 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:50.703 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:50.703 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:50.703 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:50.703 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:50.703 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:50.703 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:50.703 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:50.703 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:50.703 rmmod nvme_tcp 00:38:50.703 rmmod nvme_fabrics 00:38:50.703 rmmod nvme_keyring 00:38:50.703 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 3621522 ']' 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 3621522 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3621522 ']' 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3621522 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3621522 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3621522' 00:38:50.962 killing process with pid 3621522 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3621522 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3621522 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 
00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:50.962 06:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:53.496 06:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:53.496 00:38:53.496 real 0m10.476s 00:38:53.496 user 0m16.072s 00:38:53.496 sys 0m6.318s 00:38:53.496 06:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:53.496 06:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:53.496 ************************************ 00:38:53.496 END TEST nvmf_bdev_io_wait 00:38:53.496 ************************************ 00:38:53.496 06:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:53.496 06:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:53.496 06:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:53.496 06:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:53.496 ************************************ 00:38:53.496 START TEST nvmf_queue_depth 00:38:53.496 ************************************ 00:38:53.496 06:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:53.496 * Looking for test storage... 
00:38:53.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:53.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.496 --rc genhtml_branch_coverage=1 00:38:53.496 --rc genhtml_function_coverage=1 00:38:53.496 --rc genhtml_legend=1 00:38:53.496 --rc geninfo_all_blocks=1 00:38:53.496 --rc geninfo_unexecuted_blocks=1 00:38:53.496 00:38:53.496 ' 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:53.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.496 --rc genhtml_branch_coverage=1 00:38:53.496 --rc genhtml_function_coverage=1 00:38:53.496 --rc genhtml_legend=1 00:38:53.496 --rc geninfo_all_blocks=1 00:38:53.496 --rc geninfo_unexecuted_blocks=1 00:38:53.496 00:38:53.496 ' 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:53.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.496 --rc genhtml_branch_coverage=1 00:38:53.496 --rc genhtml_function_coverage=1 00:38:53.496 --rc genhtml_legend=1 00:38:53.496 --rc geninfo_all_blocks=1 00:38:53.496 --rc geninfo_unexecuted_blocks=1 00:38:53.496 00:38:53.496 ' 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:53.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.496 --rc genhtml_branch_coverage=1 00:38:53.496 --rc genhtml_function_coverage=1 00:38:53.496 --rc genhtml_legend=1 00:38:53.496 --rc geninfo_all_blocks=1 00:38:53.496 --rc 
geninfo_unexecuted_blocks=1 00:38:53.496 00:38:53.496 ' 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:53.496 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:53.497 06:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:58.769 06:07:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:58.769 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:58.769 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:58.769 Found net devices under 0000:af:00.0: cvl_0_0 00:38:58.769 06:07:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:58.769 Found net devices under 0000:af:00.1: cvl_0_1 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:58.769 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:58.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:58.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:38:58.770 00:38:58.770 --- 10.0.0.2 ping statistics --- 00:38:58.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:58.770 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:58.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:58.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:38:58.770 00:38:58.770 --- 10.0.0.1 ping statistics --- 00:38:58.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:58.770 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=3625260 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 3625260 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3625260 ']' 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:58.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:58.770 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:59.030 [2024-12-16 06:07:32.634692] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:59.030 [2024-12-16 06:07:32.635617] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:59.030 [2024-12-16 06:07:32.635650] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:59.030 [2024-12-16 06:07:32.697599] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:59.030 [2024-12-16 06:07:32.735587] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:59.030 [2024-12-16 06:07:32.735625] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:59.030 [2024-12-16 06:07:32.735633] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:59.030 [2024-12-16 06:07:32.735639] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:59.030 [2024-12-16 06:07:32.735644] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:59.030 [2024-12-16 06:07:32.735668] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:59.030 [2024-12-16 06:07:32.795360] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:59.030 [2024-12-16 06:07:32.795588] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
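For reference, the target-side plumbing traced above reduces to the following sequence (a condensed sketch of the commands this run actually executed; the cvl_0_0/cvl_0_1 interface names and the cvl_0_0_ns_spdk namespace are simply what nvmf_tcp_init detected and created on this machine):

  # put one port of the NIC into a private namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # admit NVMe/TCP traffic
  # start the nvmf target inside the namespace: single core (-m 0x2), interrupt mode
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2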
00:38:59.030 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:59.030 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:38:59.030 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:59.030 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:59.030 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:59.030 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:59.030 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:59.030 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.030 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:59.030 [2024-12-16 06:07:32.860288] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:59.030 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.030 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:59.030 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.030 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:59.289 Malloc0 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:59.289 [2024-12-16 06:07:32.916212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3625360 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3625360 /var/tmp/bdevperf.sock 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3625360 ']' 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:59.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:59.289 06:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:59.289 [2024-12-16 06:07:32.966176] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:38:59.289 [2024-12-16 06:07:32.966215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3625360 ]
00:38:59.289 [2024-12-16 06:07:33.021566] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:59.289 [2024-12-16 06:07:33.062258] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:38:59.548 06:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:38:59.548 06:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0
00:38:59.548 06:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:38:59.548 06:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:59.548 06:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:38:59.548 NVMe0n1
00:38:59.548 06:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:59.548 06:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:38:59.807 Running I/O for 10 seconds...
00:39:01.679 12288.00 IOPS, 48.00 MiB/s
[2024-12-16T05:07:36.472Z] 12499.50 IOPS, 48.83 MiB/s
[2024-12-16T05:07:37.850Z] 12629.33 IOPS, 49.33 MiB/s
[2024-12-16T05:07:38.787Z] 12609.00 IOPS, 49.25 MiB/s
[2024-12-16T05:07:39.723Z] 12692.60 IOPS, 49.58 MiB/s
[2024-12-16T05:07:40.660Z] 12708.33 IOPS, 49.64 MiB/s
[2024-12-16T05:07:41.596Z] 12730.86 IOPS, 49.73 MiB/s
[2024-12-16T05:07:42.532Z] 12744.25 IOPS, 49.78 MiB/s
[2024-12-16T05:07:43.910Z] 12759.11 IOPS, 49.84 MiB/s
[2024-12-16T05:07:43.910Z] 12787.30 IOPS, 49.95 MiB/s
00:39:10.054 Latency(us)
00:39:10.054 [2024-12-16T05:07:43.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:10.054 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:39:10.054 Verification LBA range: start 0x0 length 0x4000
00:39:10.054 NVMe0n1 : 10.06 12802.40 50.01 0.00 0.00 79702.82 18599.74 51430.16
00:39:10.054 [2024-12-16T05:07:43.910Z] ===================================================================================================================
00:39:10.054 [2024-12-16T05:07:43.910Z] Total : 12802.40 50.01 0.00 0.00 79702.82 18599.74 51430.16
00:39:10.054 {
00:39:10.054 "results": [
00:39:10.054 {
00:39:10.054 "job": "NVMe0n1",
00:39:10.054 "core_mask": "0x1",
00:39:10.054 "workload": "verify",
00:39:10.054 "status": "finished",
00:39:10.054 "verify_range": {
00:39:10.054 "start": 0,
00:39:10.054 "length": 16384
00:39:10.054 },
00:39:10.054 "queue_depth": 1024,
00:39:10.054 "io_size": 4096,
00:39:10.054 "runtime": 10.062018,
00:39:10.054 "iops": 12802.402062886391,
00:39:10.054 "mibps": 50.009383058149965,
00:39:10.054 "io_failed": 0,
00:39:10.054 "io_timeout": 0,
00:39:10.054 "avg_latency_us": 79702.82049988577,
00:39:10.054 "min_latency_us": 18599.74095238095,
00:39:10.054 "max_latency_us": 51430.15619047619
00:39:10.054 }
00:39:10.054 ], 00:39:10.054 "core_count": 1 00:39:10.054 } 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3625360 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3625360 ']' 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3625360 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3625360 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3625360' 00:39:10.054 killing process with pid 3625360 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3625360 00:39:10.054 Received shutdown signal, test time was about 10.000000 seconds 00:39:10.054 00:39:10.054 Latency(us) 00:39:10.054 [2024-12-16T05:07:43.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:10.054 [2024-12-16T05:07:43.910Z] =================================================================================================================== 00:39:10.054 [2024-12-16T05:07:43.910Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3625360 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:10.054 rmmod nvme_tcp 00:39:10.054 rmmod nvme_fabrics 00:39:10.054 rmmod nvme_keyring 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
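In shell terms, the queue-depth run that just completed amounts to the following (a condensed sketch of the traced commands; SPDK_ROOT stands for the jenkins workspace checkout used above, and rpc.py is written out where the trace went through the rpc_cmd wrapper):

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # target side: TCP transport, a 64 MiB malloc bdev with 512 B blocks, one subsystem listening on 10.0.0.2:4420
  $SPDK_ROOT/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK_ROOT/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  $SPDK_ROOT/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $SPDK_ROOT/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK_ROOT/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: bdevperf waiting for RPCs (-z), queue depth 1024, 4 KiB verify workload for 10 s
  $SPDK_ROOT/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  $SPDK_ROOT/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests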
00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 3625260 ']' 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 3625260 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3625260 ']' 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3625260 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:10.054 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3625260 00:39:10.313 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:10.313 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:10.314 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3625260' 00:39:10.314 killing process with pid 3625260 00:39:10.314 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3625260 00:39:10.314 06:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3625260 00:39:10.314 06:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:10.314 06:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:10.314 06:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:10.314 06:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:39:10.314 06:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:39:10.314 06:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:39:10.314 06:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:10.314 06:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:10.314 06:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:10.314 06:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:10.314 06:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:10.314 06:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:12.849 00:39:12.849 real 0m19.252s 00:39:12.849 user 0m22.763s 00:39:12.849 sys 0m5.879s 00:39:12.849 06:07:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:12.849 ************************************ 00:39:12.849 END TEST nvmf_queue_depth 00:39:12.849 ************************************ 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:12.849 ************************************ 00:39:12.849 START TEST nvmf_target_multipath 00:39:12.849 ************************************ 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:12.849 * Looking for test storage... 00:39:12.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:39:12.849 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:12.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.850 --rc genhtml_branch_coverage=1 00:39:12.850 --rc genhtml_function_coverage=1 00:39:12.850 --rc genhtml_legend=1 00:39:12.850 --rc geninfo_all_blocks=1 00:39:12.850 --rc geninfo_unexecuted_blocks=1 00:39:12.850 00:39:12.850 ' 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:12.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.850 --rc genhtml_branch_coverage=1 00:39:12.850 --rc genhtml_function_coverage=1 00:39:12.850 --rc genhtml_legend=1 00:39:12.850 --rc geninfo_all_blocks=1 00:39:12.850 --rc geninfo_unexecuted_blocks=1 00:39:12.850 00:39:12.850 ' 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:12.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.850 --rc genhtml_branch_coverage=1 00:39:12.850 --rc genhtml_function_coverage=1 00:39:12.850 --rc genhtml_legend=1 
00:39:12.850 --rc geninfo_all_blocks=1 00:39:12.850 --rc geninfo_unexecuted_blocks=1 00:39:12.850 00:39:12.850 ' 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:12.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.850 --rc genhtml_branch_coverage=1 00:39:12.850 --rc genhtml_function_coverage=1 00:39:12.850 --rc genhtml_legend=1 00:39:12.850 --rc geninfo_all_blocks=1 00:39:12.850 --rc geninfo_unexecuted_blocks=1 00:39:12.850 00:39:12.850 ' 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:39:12.850 06:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:39:18.120 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:18.121 06:07:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:18.121 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:18.121 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:18.121 06:07:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:18.121 Found net devices under 0000:af:00.0: cvl_0_0 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:18.121 Found net devices under 0000:af:00.1: cvl_0_1 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:18.121 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:18.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:18.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:39:18.122 00:39:18.122 --- 10.0.0.2 ping statistics --- 00:39:18.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:18.122 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:18.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:18.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:39:18.122 00:39:18.122 --- 10.0.0.1 ping statistics --- 00:39:18.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:18.122 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:39:18.122 only one NIC for nvmf test 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:18.122 rmmod nvme_tcp 00:39:18.122 rmmod nvme_fabrics 00:39:18.122 rmmod nvme_keyring 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:18.122 06:07:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:18.122 06:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:20.026 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:20.026 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:39:20.026 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:39:20.026 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:20.026 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:20.026 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:39:20.027 06:07:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:20.027 00:39:20.027 real 0m7.553s 00:39:20.027 user 0m1.519s 00:39:20.027 sys 0m4.004s 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:20.027 ************************************ 00:39:20.027 END TEST nvmf_target_multipath 00:39:20.027 ************************************ 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:20.027 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:20.287 ************************************ 00:39:20.287 START TEST nvmf_zcopy 00:39:20.287 ************************************ 00:39:20.287 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:20.287 * Looking for test storage... 
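The nvmftestfini/nvmf_tcp_fini teardown that closed the multipath test just above undoes the earlier network setup. A condensed, hedged sketch of the commands visible in that part of the trace (the script's helper functions and error handling are omitted; interface and namespace names are the ones the log reports, and the namespace deletion is an assumed equivalent of the remove_spdk_ns helper):

    # Unload the NVMe/TCP initiator modules loaded for the test
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Strip only the iptables rules the test tagged with SPDK_NVMF, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Tear down the target namespace and flush the initiator-side port
    ip netns del cvl_0_0_ns_spdk    # assumed equivalent of the remove_spdk_ns helper in the trace
    ip -4 addr flush cvl_0_1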
00:39:20.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:20.287 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:20.287 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:39:20.287 06:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:20.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:20.287 --rc genhtml_branch_coverage=1 00:39:20.287 --rc genhtml_function_coverage=1 00:39:20.287 --rc genhtml_legend=1 00:39:20.287 --rc geninfo_all_blocks=1 00:39:20.287 --rc geninfo_unexecuted_blocks=1 00:39:20.287 00:39:20.287 ' 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:20.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:20.287 --rc genhtml_branch_coverage=1 00:39:20.287 --rc genhtml_function_coverage=1 00:39:20.287 --rc genhtml_legend=1 00:39:20.287 --rc geninfo_all_blocks=1 00:39:20.287 --rc geninfo_unexecuted_blocks=1 00:39:20.287 00:39:20.287 ' 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:20.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:20.287 --rc genhtml_branch_coverage=1 00:39:20.287 --rc genhtml_function_coverage=1 00:39:20.287 --rc genhtml_legend=1 00:39:20.287 --rc geninfo_all_blocks=1 00:39:20.287 --rc geninfo_unexecuted_blocks=1 00:39:20.287 00:39:20.287 ' 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:20.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:20.287 --rc genhtml_branch_coverage=1 00:39:20.287 --rc genhtml_function_coverage=1 00:39:20.287 --rc genhtml_legend=1 00:39:20.287 --rc geninfo_all_blocks=1 00:39:20.287 --rc geninfo_unexecuted_blocks=1 00:39:20.287 00:39:20.287 ' 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:20.287 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:20.288 06:07:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:39:20.288 06:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:39:25.562 06:07:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:25.562 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:25.562 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:25.562 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:25.562 Found net devices under 0000:af:00.0: cvl_0_0 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:25.563 Found net devices under 0000:af:00.1: cvl_0_1 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:25.563 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:25.822 06:07:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:25.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:25.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:39:25.822 00:39:25.822 --- 10.0.0.2 ping statistics --- 00:39:25.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:25.822 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:25.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:25.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:39:25.822 00:39:25.822 --- 10.0.0.1 ping statistics --- 00:39:25.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:25.822 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=3633771 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 3633771 
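The nvmf_tcp_init trace above (nvmf/common.sh lines ~250-291) brings the two-port TCP test topology back up for the zcopy run: the target-side port cvl_0_0 is moved into a fresh network namespace with 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24, TCP port 4420 is opened in iptables, and both directions are ping-checked. A condensed, hedged sketch of that sequence, using the plain iproute2/iptables commands visible in the trace (the real common.sh wraps these in helpers):

    TGT_IF=cvl_0_0            # target-side port (moved into a namespace)
    INI_IF=cvl_0_1            # initiator-side port (stays in the default namespace)
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"

    ip netns add "$NS"                          # isolated namespace for the target
    ip link set "$TGT_IF" netns "$NS"           # move the target port into it

    ip addr add 10.0.0.1/24 dev "$INI_IF"       # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target address

    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Allow NVMe/TCP traffic on port 4420, tagged so the teardown can strip it again
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.2                          # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator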
00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3633771 ']' 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:25.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:25.822 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:25.822 [2024-12-16 06:07:59.643582] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:25.822 [2024-12-16 06:07:59.644476] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:39:25.822 [2024-12-16 06:07:59.644508] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:26.081 [2024-12-16 06:07:59.703986] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:26.081 [2024-12-16 06:07:59.742702] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:26.081 [2024-12-16 06:07:59.742738] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:26.081 [2024-12-16 06:07:59.742745] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:26.081 [2024-12-16 06:07:59.742752] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:26.081 [2024-12-16 06:07:59.742757] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:26.081 [2024-12-16 06:07:59.742778] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:26.081 [2024-12-16 06:07:59.803054] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:26.081 [2024-12-16 06:07:59.803276] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
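For the zcopy test itself, nvmfappstart launches the SPDK target inside that namespace in interrupt mode and waitforlisten blocks until the RPC socket answers, as the trace above shows. A minimal sketch, assuming the working directory is the SPDK repo root and substituting a plain polling loop for the waitforlisten helper:

    # Start nvmf_tgt in the target namespace: shm id 0, full tracepoint mask,
    # interrupt mode, core mask 0x2 (the exact flags from the trace)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!

    # Wait until the app listens on /var/tmp/spdk.sock; spdk_get_version is a
    # lightweight standard RPC, used here only as a readiness probe (the script's
    # waitforlisten helper does something similar with retries and timeouts)
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done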
00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:26.081 [2024-12-16 06:07:59.871403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:26.081 [2024-12-16 06:07:59.887619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:39:26.081 06:07:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:26.081 malloc0 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:26.081 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.340 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:39:26.340 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:39:26.340 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:39:26.340 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:39:26.340 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:39:26.340 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:39:26.340 { 00:39:26.340 "params": { 00:39:26.340 "name": "Nvme$subsystem", 00:39:26.340 "trtype": "$TEST_TRANSPORT", 00:39:26.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:26.340 "adrfam": "ipv4", 00:39:26.340 "trsvcid": "$NVMF_PORT", 00:39:26.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:26.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:26.340 "hdgst": ${hdgst:-false}, 00:39:26.340 "ddgst": ${ddgst:-false} 00:39:26.340 }, 00:39:26.340 "method": "bdev_nvme_attach_controller" 00:39:26.340 } 00:39:26.340 EOF 00:39:26.340 )") 00:39:26.340 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:39:26.340 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:39:26.340 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:39:26.340 06:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:39:26.340 "params": { 00:39:26.340 "name": "Nvme1", 00:39:26.340 "trtype": "tcp", 00:39:26.340 "traddr": "10.0.0.2", 00:39:26.340 "adrfam": "ipv4", 00:39:26.340 "trsvcid": "4420", 00:39:26.340 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:26.340 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:26.340 "hdgst": false, 00:39:26.340 "ddgst": false 00:39:26.340 }, 00:39:26.340 "method": "bdev_nvme_attach_controller" 00:39:26.340 }' 00:39:26.340 [2024-12-16 06:07:59.983984] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:39:26.340 [2024-12-16 06:07:59.984028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3633890 ] 00:39:26.340 [2024-12-16 06:08:00.041305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:26.340 [2024-12-16 06:08:00.083880] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:26.599 Running I/O for 10 seconds... 00:39:28.911 8285.00 IOPS, 64.73 MiB/s [2024-12-16T05:08:03.703Z] 8319.00 IOPS, 64.99 MiB/s [2024-12-16T05:08:04.639Z] 8364.33 IOPS, 65.35 MiB/s [2024-12-16T05:08:05.574Z] 8377.50 IOPS, 65.45 MiB/s [2024-12-16T05:08:06.509Z] 8382.40 IOPS, 65.49 MiB/s [2024-12-16T05:08:07.445Z] 8387.00 IOPS, 65.52 MiB/s [2024-12-16T05:08:08.382Z] 8399.57 IOPS, 65.62 MiB/s [2024-12-16T05:08:09.760Z] 8404.62 IOPS, 65.66 MiB/s [2024-12-16T05:08:10.697Z] 8408.89 IOPS, 65.69 MiB/s [2024-12-16T05:08:10.697Z] 8402.90 IOPS, 65.65 MiB/s 00:39:36.841 Latency(us) 00:39:36.841 [2024-12-16T05:08:10.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:36.841 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:39:36.841 Verification LBA range: start 0x0 length 0x1000 00:39:36.841 Nvme1n1 : 10.01 8405.99 65.67 0.00 0.00 15184.73 2324.97 21720.50 00:39:36.841 [2024-12-16T05:08:10.697Z] =================================================================================================================== 00:39:36.841 [2024-12-16T05:08:10.697Z] Total : 8405.99 65.67 0.00 0.00 15184.73 2324.97 21720.50 00:39:36.841 06:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3635565 00:39:36.841 06:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:39:36.841 06:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:36.841 06:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:39:36.841 06:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:39:36.841 06:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:39:36.841 06:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:39:36.841 06:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:39:36.841 06:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:39:36.841 { 00:39:36.841 "params": { 00:39:36.841 "name": "Nvme$subsystem", 00:39:36.841 "trtype": "$TEST_TRANSPORT", 00:39:36.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:36.841 "adrfam": "ipv4", 00:39:36.841 "trsvcid": "$NVMF_PORT", 00:39:36.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:36.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:36.841 "hdgst": ${hdgst:-false}, 00:39:36.841 "ddgst": ${ddgst:-false} 00:39:36.841 }, 00:39:36.841 "method": "bdev_nvme_attach_controller" 00:39:36.841 } 00:39:36.841 EOF 00:39:36.841 )") 00:39:36.841 [2024-12-16 06:08:10.567103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:39:36.841 [2024-12-16 06:08:10.567136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.841 06:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:39:36.841 06:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:39:36.841 06:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:39:36.841 [2024-12-16 06:08:10.575071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.841 [2024-12-16 06:08:10.575085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.841 06:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:39:36.841 "params": { 00:39:36.841 "name": "Nvme1", 00:39:36.841 "trtype": "tcp", 00:39:36.841 "traddr": "10.0.0.2", 00:39:36.841 "adrfam": "ipv4", 00:39:36.841 "trsvcid": "4420", 00:39:36.841 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:36.841 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:36.841 "hdgst": false, 00:39:36.841 "ddgst": false 00:39:36.841 }, 00:39:36.841 "method": "bdev_nvme_attach_controller" 00:39:36.841 }' 00:39:36.842 [2024-12-16 06:08:10.583066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.842 [2024-12-16 06:08:10.583078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.842 [2024-12-16 06:08:10.591064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.842 [2024-12-16 06:08:10.591074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.842 [2024-12-16 06:08:10.599064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.842 [2024-12-16 06:08:10.599074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.842 [2024-12-16 06:08:10.606310] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:39:36.842 [2024-12-16 06:08:10.606352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635565 ] 00:39:36.842 [2024-12-16 06:08:10.607067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.842 [2024-12-16 06:08:10.607077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.842 [2024-12-16 06:08:10.615065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.842 [2024-12-16 06:08:10.615075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.842 [2024-12-16 06:08:10.623065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.842 [2024-12-16 06:08:10.623076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.842 [2024-12-16 06:08:10.631066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.842 [2024-12-16 06:08:10.631075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.842 [2024-12-16 06:08:10.639068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.842 [2024-12-16 06:08:10.639080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.842 [2024-12-16 06:08:10.647066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.842 [2024-12-16 06:08:10.647077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.842 [2024-12-16 06:08:10.655066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.842 [2024-12-16 06:08:10.655077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.842 [2024-12-16 06:08:10.661039] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:36.842 [2024-12-16 06:08:10.663065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.842 [2024-12-16 06:08:10.663075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.842 [2024-12-16 06:08:10.671068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.842 [2024-12-16 06:08:10.671080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.842 [2024-12-16 06:08:10.679065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.842 [2024-12-16 06:08:10.679076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.842 [2024-12-16 06:08:10.687069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.842 [2024-12-16 06:08:10.687090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.842 [2024-12-16 06:08:10.695066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.842 [2024-12-16 06:08:10.695077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.101 [2024-12-16 06:08:10.700875] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:37.101 [2024-12-16 06:08:10.703066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:39:37.101 [2024-12-16 06:08:10.703078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.101 [2024-12-16 06:08:10.711071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.101 [2024-12-16 06:08:10.711084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.101 [2024-12-16 06:08:10.719074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.101 [2024-12-16 06:08:10.719092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.101 [2024-12-16 06:08:10.727071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.101 [2024-12-16 06:08:10.727084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.101 [2024-12-16 06:08:10.735068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.101 [2024-12-16 06:08:10.735080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.101 [2024-12-16 06:08:10.743067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.101 [2024-12-16 06:08:10.743079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.101 [2024-12-16 06:08:10.751065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.101 [2024-12-16 06:08:10.751075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.101 [2024-12-16 06:08:10.759068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.101 [2024-12-16 06:08:10.759079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.101 [2024-12-16 06:08:10.767067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.101 [2024-12-16 06:08:10.767077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.101 [2024-12-16 06:08:10.775066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.101 [2024-12-16 06:08:10.775076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.101 [2024-12-16 06:08:10.783111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.101 [2024-12-16 06:08:10.783133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.101 [2024-12-16 06:08:10.791070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.101 [2024-12-16 06:08:10.791084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.101 [2024-12-16 06:08:10.799111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.101 [2024-12-16 06:08:10.799128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.101 [2024-12-16 06:08:10.807069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.101 [2024-12-16 06:08:10.807083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.101 [2024-12-16 06:08:10.815066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.101 [2024-12-16 06:08:10.815076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.102 [2024-12-16 
06:08:10.823076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.102 [2024-12-16 06:08:10.823088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.102 [2024-12-16 06:08:10.831074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.102 [2024-12-16 06:08:10.831084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.102 [2024-12-16 06:08:10.839067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.102 [2024-12-16 06:08:10.839078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.102 [2024-12-16 06:08:10.847068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.102 [2024-12-16 06:08:10.847080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.102 [2024-12-16 06:08:10.855069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.102 [2024-12-16 06:08:10.855082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.102 [2024-12-16 06:08:10.863071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.102 [2024-12-16 06:08:10.863085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.102 [2024-12-16 06:08:10.871068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.102 [2024-12-16 06:08:10.871080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.102 [2024-12-16 06:08:10.915124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.102 [2024-12-16 06:08:10.915142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.102 [2024-12-16 06:08:10.923069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.102 [2024-12-16 06:08:10.923082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.102 Running I/O for 5 seconds... 
00:39:37.102 [2024-12-16 06:08:10.936908] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.102 [2024-12-16 06:08:10.936928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.102 [2024-12-16 06:08:10.947109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.102 [2024-12-16 06:08:10.947128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.102 [2024-12-16 06:08:10.954100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.102 [2024-12-16 06:08:10.954119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.361 [2024-12-16 06:08:10.962483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.361 [2024-12-16 06:08:10.962502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.361 [2024-12-16 06:08:10.974316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.361 [2024-12-16 06:08:10.974342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.361 [2024-12-16 06:08:10.988802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.361 [2024-12-16 06:08:10.988822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.361 [2024-12-16 06:08:10.997688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.361 [2024-12-16 06:08:10.997707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.361 [2024-12-16 06:08:11.012679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.361 [2024-12-16 06:08:11.012698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.361 [2024-12-16 06:08:11.023320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.361 [2024-12-16 06:08:11.023339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.361 [2024-12-16 06:08:11.036099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.361 [2024-12-16 06:08:11.036119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.361 [2024-12-16 06:08:11.047574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.361 [2024-12-16 06:08:11.047594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.361 [2024-12-16 06:08:11.059711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.361 [2024-12-16 06:08:11.059731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.361 [2024-12-16 06:08:11.070288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.361 [2024-12-16 06:08:11.070307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.361 [2024-12-16 06:08:11.084726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.361 [2024-12-16 06:08:11.084745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.361 [2024-12-16 06:08:11.091656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.361 
[2024-12-16 06:08:11.091675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.361 [2024-12-16 06:08:11.101977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.361 [2024-12-16 06:08:11.101996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.361 [2024-12-16 06:08:11.115694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.361 [2024-12-16 06:08:11.115713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.361 [2024-12-16 06:08:11.126306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.361 [2024-12-16 06:08:11.126326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.361 [2024-12-16 06:08:11.140981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.361 [2024-12-16 06:08:11.141000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.361 [2024-12-16 06:08:11.150132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.361 [2024-12-16 06:08:11.150151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.361 [2024-12-16 06:08:11.164017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.361 [2024-12-16 06:08:11.164037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.362 [2024-12-16 06:08:11.174072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.362 [2024-12-16 06:08:11.174091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.362 [2024-12-16 06:08:11.188388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.362 [2024-12-16 06:08:11.188407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.362 [2024-12-16 06:08:11.199063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.362 [2024-12-16 06:08:11.199086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.362 [2024-12-16 06:08:11.212279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.362 [2024-12-16 06:08:11.212299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.221239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.221259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.227921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.227941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.239007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.239034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.251278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.251297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.263534] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.263553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.276182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.276201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.287220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.287239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.299406] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.299425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.312872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.312895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.320617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.320637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.328482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.328501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.339088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.339108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.350776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.350796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.364001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.364021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.373435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.373455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.380379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.380399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.391700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.391720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.403350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.403374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.415764] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.415784] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.426835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.426863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.440498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.440520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.449640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.449660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.456273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.456292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.621 [2024-12-16 06:08:11.466442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.621 [2024-12-16 06:08:11.466466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.481039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.481058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.488999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.489018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.498015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.498034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.512151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.512170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.523357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.523376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.536470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.536490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.545355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.545375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.552270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.552290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.562661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.562684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.574708] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.574730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.589506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.589527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.597683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.597703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.611186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.611211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.618442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.618461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.626589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.626608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.637731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.637751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.652293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.652312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.663502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.663521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.676159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.676177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.687144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.687163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.693855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.693874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.703256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.703275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.710826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.710845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.722336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.722355] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.887 [2024-12-16 06:08:11.736352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.887 [2024-12-16 06:08:11.736370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.145 [2024-12-16 06:08:11.745349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.145 [2024-12-16 06:08:11.745368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.145 [2024-12-16 06:08:11.752119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.145 [2024-12-16 06:08:11.752137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.145 [2024-12-16 06:08:11.762094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.145 [2024-12-16 06:08:11.762113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.145 [2024-12-16 06:08:11.776667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.145 [2024-12-16 06:08:11.776686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.145 [2024-12-16 06:08:11.787037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.145 [2024-12-16 06:08:11.787056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.145 [2024-12-16 06:08:11.793726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.145 [2024-12-16 06:08:11.793745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.145 [2024-12-16 06:08:11.806796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.145 [2024-12-16 06:08:11.806816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.145 [2024-12-16 06:08:11.820349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.145 [2024-12-16 06:08:11.820368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.145 [2024-12-16 06:08:11.829362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.145 [2024-12-16 06:08:11.829381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.145 [2024-12-16 06:08:11.835914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.145 [2024-12-16 06:08:11.835933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.145 [2024-12-16 06:08:11.847163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.145 [2024-12-16 06:08:11.847182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.145 [2024-12-16 06:08:11.859442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.145 [2024-12-16 06:08:11.859461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.145 [2024-12-16 06:08:11.872582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.145 [2024-12-16 06:08:11.872601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.145 [2024-12-16 06:08:11.881517] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.145 [2024-12-16 06:08:11.881535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.145 [2024-12-16 06:08:11.887990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.145 [2024-12-16 06:08:11.888008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.145 [2024-12-16 06:08:11.898228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.145 [2024-12-16 06:08:11.898248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.146 [2024-12-16 06:08:11.913054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.146 [2024-12-16 06:08:11.913073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.146 [2024-12-16 06:08:11.922536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.146 [2024-12-16 06:08:11.922554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.146 16487.00 IOPS, 128.80 MiB/s [2024-12-16T05:08:12.002Z] [2024-12-16 06:08:11.935420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.146 [2024-12-16 06:08:11.935439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.146 [2024-12-16 06:08:11.947605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.146 [2024-12-16 06:08:11.947625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.146 [2024-12-16 06:08:11.960203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.146 [2024-12-16 06:08:11.960222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.146 [2024-12-16 06:08:11.971009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.146 [2024-12-16 06:08:11.971029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.146 [2024-12-16 06:08:11.984115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.146 [2024-12-16 06:08:11.984135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.146 [2024-12-16 06:08:11.994035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.146 [2024-12-16 06:08:11.994055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.008168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.008186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.017661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.017682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.032970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.032990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.042230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:39:38.443 [2024-12-16 06:08:12.042250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.056329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.056349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.066640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.066658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.080285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.080305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.091224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.091242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.104549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.104568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.114448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.114467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.128594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.128612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.138682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.138701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.152746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.152764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.161779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.161798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.176688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.176708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.187050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.187069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.194311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.194330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.208042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.208061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.218073] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.218091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.231959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.231977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.242196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.242215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.255634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.255653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.443 [2024-12-16 06:08:12.267270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.443 [2024-12-16 06:08:12.267290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.803 [2024-12-16 06:08:12.274496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.803 [2024-12-16 06:08:12.274515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.803 [2024-12-16 06:08:12.282865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.803 [2024-12-16 06:08:12.282884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.803 [2024-12-16 06:08:12.293952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.803 [2024-12-16 06:08:12.293971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.803 [2024-12-16 06:08:12.308615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.803 [2024-12-16 06:08:12.308639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.803 [2024-12-16 06:08:12.317967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.803 [2024-12-16 06:08:12.317988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.803 [2024-12-16 06:08:12.332092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.803 [2024-12-16 06:08:12.332111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.803 [2024-12-16 06:08:12.343151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.803 [2024-12-16 06:08:12.343171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.803 [2024-12-16 06:08:12.350861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.803 [2024-12-16 06:08:12.350881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.803 [2024-12-16 06:08:12.362745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.803 [2024-12-16 06:08:12.362764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.803 [2024-12-16 06:08:12.375977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.803 [2024-12-16 06:08:12.375997] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.803 [2024-12-16 06:08:12.386885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.803 [2024-12-16 06:08:12.386905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.803 [2024-12-16 06:08:12.400629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.803 [2024-12-16 06:08:12.400648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.804 [2024-12-16 06:08:12.409727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.804 [2024-12-16 06:08:12.409745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.804 [2024-12-16 06:08:12.423900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.804 [2024-12-16 06:08:12.423918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.804 [2024-12-16 06:08:12.434980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.804 [2024-12-16 06:08:12.434999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.804 [2024-12-16 06:08:12.448292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.804 [2024-12-16 06:08:12.448316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.804 [2024-12-16 06:08:12.459433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.804 [2024-12-16 06:08:12.459452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.804 [2024-12-16 06:08:12.472607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.804 [2024-12-16 06:08:12.472627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.804 [2024-12-16 06:08:12.482093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.804 [2024-12-16 06:08:12.482112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.804 [2024-12-16 06:08:12.496603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.804 [2024-12-16 06:08:12.496623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.804 [2024-12-16 06:08:12.506208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.804 [2024-12-16 06:08:12.506227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.804 [2024-12-16 06:08:12.520431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.804 [2024-12-16 06:08:12.520450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.804 [2024-12-16 06:08:12.529416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.804 [2024-12-16 06:08:12.529435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.804 [2024-12-16 06:08:12.535635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.804 [2024-12-16 06:08:12.535653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.804 [2024-12-16 06:08:12.546824] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.804 [2024-12-16 06:08:12.546844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.804 [2024-12-16 06:08:12.559131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.804 [2024-12-16 06:08:12.559151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.804 [2024-12-16 06:08:12.572075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.804 [2024-12-16 06:08:12.572095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.804 [2024-12-16 06:08:12.583201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.804 [2024-12-16 06:08:12.583221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.804 [2024-12-16 06:08:12.589898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.804 [2024-12-16 06:08:12.589918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.804 [2024-12-16 06:08:12.598320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.804 [2024-12-16 06:08:12.598341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.804 [2024-12-16 06:08:12.612818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.804 [2024-12-16 06:08:12.612838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:38.804 [2024-12-16 06:08:12.621905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:38.804 [2024-12-16 06:08:12.621925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.636665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.636687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.647171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.647192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.660540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.660565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.669754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.669775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.684837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.684865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.693790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.693810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.708670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.708690] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.718415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.718435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.732627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.732648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.742936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.742955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.756302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.756322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.767493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.767512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.779793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.779813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.791163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.791183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.797608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.797627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.809900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.809920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.824752] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.824772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.834162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.834182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.848430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.848450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.857660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.857679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.864556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.864577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.874493] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.874517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.888386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.888405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.897079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.897098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.903966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.903984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.064 [2024-12-16 06:08:12.915255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.064 [2024-12-16 06:08:12.915275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.323 [2024-12-16 06:08:12.922796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.323 [2024-12-16 06:08:12.922816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.323 [2024-12-16 06:08:12.934124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.323 [2024-12-16 06:08:12.934144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.323 16562.50 IOPS, 129.39 MiB/s [2024-12-16T05:08:13.179Z] [2024-12-16 06:08:12.947650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:12.947668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:12.958936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:12.958957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:12.973155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:12.973175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:12.982287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:12.982306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:12.996116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:12.996147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:13.005401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:13.005420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:13.011866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:13.011886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:13.022280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:39:39.324 [2024-12-16 06:08:13.022299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:13.035038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:13.035058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:13.041985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:13.042005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:13.050101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:13.050121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:13.064200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:13.064219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:13.075422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:13.075440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:13.087254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:13.087273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:13.093708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:13.093727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:13.103794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:13.103812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:13.114928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:13.114947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:13.128695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:13.128715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:13.137810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:13.137828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:13.152351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:13.152370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:13.161482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:13.161500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:13.167902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:13.167920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.324 [2024-12-16 06:08:13.178322] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.324 [2024-12-16 06:08:13.178341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.191426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.191445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.202763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.202783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.216365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.216384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.225428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.225448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.232170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.232189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.243053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.243072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.254096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.254115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.268821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.268841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.278076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.278095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.292316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.292335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.302860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.302879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.316403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.316423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.325622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.325641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.340445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.340464] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.351892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.351911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.363056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.363075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.370256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.370274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.378675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.378693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.389688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.389708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.404834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.404861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.415183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.415203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.422172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.422192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.583 [2024-12-16 06:08:13.430591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.583 [2024-12-16 06:08:13.430609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.842 [2024-12-16 06:08:13.442401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.842 [2024-12-16 06:08:13.442420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.842 [2024-12-16 06:08:13.456747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.842 [2024-12-16 06:08:13.456767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.466609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.466628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.480096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.480115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.489096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.489115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.496037] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.496055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.506035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.506054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.520310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.520329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.531896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.531915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.542351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.542370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.557065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.557087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.565375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.565394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.572860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.572880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.581298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.581317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.589071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.589090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.598881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.598903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.612439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.612459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.623133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.623153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.630192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.630212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.638502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.638522] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.650474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.650493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.665034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.665053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.674271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.674294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.688619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.688638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.843 [2024-12-16 06:08:13.697676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.843 [2024-12-16 06:08:13.697695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 [2024-12-16 06:08:13.712006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.712026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 [2024-12-16 06:08:13.721746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.721765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 [2024-12-16 06:08:13.736599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.736618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 [2024-12-16 06:08:13.747425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.747443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 [2024-12-16 06:08:13.758788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.758807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 [2024-12-16 06:08:13.772958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.772977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 [2024-12-16 06:08:13.780811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.780829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 [2024-12-16 06:08:13.795338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.795356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 [2024-12-16 06:08:13.807004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.807023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 [2024-12-16 06:08:13.819674] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.819692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 [2024-12-16 06:08:13.830247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.830266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 [2024-12-16 06:08:13.844610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.844629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 [2024-12-16 06:08:13.853754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.853773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 [2024-12-16 06:08:13.868285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.868305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 [2024-12-16 06:08:13.878524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.878543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 [2024-12-16 06:08:13.892911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.892930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 [2024-12-16 06:08:13.901822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.901845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 [2024-12-16 06:08:13.916763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.916782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 [2024-12-16 06:08:13.927325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.927344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 [2024-12-16 06:08:13.934474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.934492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 16487.33 IOPS, 128.81 MiB/s [2024-12-16T05:08:13.958Z] [2024-12-16 06:08:13.947084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.947115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.102 [2024-12-16 06:08:13.954055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.102 [2024-12-16 06:08:13.954074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.361 [2024-12-16 06:08:13.962647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.361 [2024-12-16 06:08:13.962666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.361 [2024-12-16 06:08:13.974142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:39:40.362 [2024-12-16 06:08:13.974161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.362 [2024-12-16 06:08:13.988359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.362 [2024-12-16 06:08:13.988378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.362 [2024-12-16 06:08:13.998214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.362 [2024-12-16 06:08:13.998233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.362 [2024-12-16 06:08:14.012704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.362 [2024-12-16 06:08:14.012723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.362 [2024-12-16 06:08:14.022014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.362 [2024-12-16 06:08:14.022033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.362 [2024-12-16 06:08:14.036861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.362 [2024-12-16 06:08:14.036880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.362 [2024-12-16 06:08:14.046143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.362 [2024-12-16 06:08:14.046163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.362 [2024-12-16 06:08:14.060518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.362 [2024-12-16 06:08:14.060539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.362 [2024-12-16 06:08:14.071812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.362 [2024-12-16 06:08:14.071832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.362 [2024-12-16 06:08:14.082153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.362 [2024-12-16 06:08:14.082173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.362 [2024-12-16 06:08:14.096070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.362 [2024-12-16 06:08:14.096091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.362 [2024-12-16 06:08:14.106971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.362 [2024-12-16 06:08:14.106990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.362 [2024-12-16 06:08:14.120922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.362 [2024-12-16 06:08:14.120948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.362 [2024-12-16 06:08:14.128414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.362 [2024-12-16 06:08:14.128434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.362 [2024-12-16 06:08:14.137714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.362 [2024-12-16 06:08:14.137734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.362 [2024-12-16 06:08:14.151650] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.362 [2024-12-16 06:08:14.151669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.362 [2024-12-16 06:08:14.162922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.362 [2024-12-16 06:08:14.162942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.362 [2024-12-16 06:08:14.176665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.362 [2024-12-16 06:08:14.176685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.362 [2024-12-16 06:08:14.185664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.362 [2024-12-16 06:08:14.185685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.362 [2024-12-16 06:08:14.200772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.362 [2024-12-16 06:08:14.200791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.362 [2024-12-16 06:08:14.210061] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.362 [2024-12-16 06:08:14.210080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.225145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.225166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.239692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.239712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.251061] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.251080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.264690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.264710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.275379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.275398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.287986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.288005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.298707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.298728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.312664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.312684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.321734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.321754] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.336473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.336493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.345545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.345564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.352263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.352283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.362517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.362537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.377108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.377128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.386222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.386241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.400730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.400750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.415249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.415269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.423066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.423085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.434549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.434568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.448841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.448868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.457515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.457533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.464071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.464091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.621 [2024-12-16 06:08:14.474428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.621 [2024-12-16 06:08:14.474447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.880 [2024-12-16 06:08:14.488950] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.880 [2024-12-16 06:08:14.488970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.880 [2024-12-16 06:08:14.497986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.880 [2024-12-16 06:08:14.498007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.880 [2024-12-16 06:08:14.512159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.880 [2024-12-16 06:08:14.512178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.880 [2024-12-16 06:08:14.523192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.881 [2024-12-16 06:08:14.523209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.881 [2024-12-16 06:08:14.535437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.881 [2024-12-16 06:08:14.535456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.881 [2024-12-16 06:08:14.547766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.881 [2024-12-16 06:08:14.547785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.881 [2024-12-16 06:08:14.559779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.881 [2024-12-16 06:08:14.559797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.881 [2024-12-16 06:08:14.572644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.881 [2024-12-16 06:08:14.572663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.881 [2024-12-16 06:08:14.582608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.881 [2024-12-16 06:08:14.582628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.881 [2024-12-16 06:08:14.596409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.881 [2024-12-16 06:08:14.596429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.881 [2024-12-16 06:08:14.605348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.881 [2024-12-16 06:08:14.605367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.881 [2024-12-16 06:08:14.611839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.881 [2024-12-16 06:08:14.611863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.881 [2024-12-16 06:08:14.622059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.881 [2024-12-16 06:08:14.622077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.881 [2024-12-16 06:08:14.635997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.881 [2024-12-16 06:08:14.636017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.881 [2024-12-16 06:08:14.646268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.881 [2024-12-16 06:08:14.646288] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.881 [2024-12-16 06:08:14.660539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.881 [2024-12-16 06:08:14.660560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.881 [2024-12-16 06:08:14.670004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.881 [2024-12-16 06:08:14.670024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.881 [2024-12-16 06:08:14.684007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.881 [2024-12-16 06:08:14.684026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.881 [2024-12-16 06:08:14.694731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.881 [2024-12-16 06:08:14.694750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.881 [2024-12-16 06:08:14.708964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.881 [2024-12-16 06:08:14.708984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.881 [2024-12-16 06:08:14.723642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.881 [2024-12-16 06:08:14.723661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.881 [2024-12-16 06:08:14.734895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.881 [2024-12-16 06:08:14.734914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.747713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.747732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.759080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.759098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.766050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.766068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.774390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.774409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.787151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.787170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.794665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.794684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.806687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.806706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.819973] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.819992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.830037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.830056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.843708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.843727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.854086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.854105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.868650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.868669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.877895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.877914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.892476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.892496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.903430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.903449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.916826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.916845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.925441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.925459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.932238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.932256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 16518.75 IOPS, 129.05 MiB/s [2024-12-16T05:08:14.996Z] [2024-12-16 06:08:14.942385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.942404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.955539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.955558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.966611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.966630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.979754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:39:41.140 [2024-12-16 06:08:14.979776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.140 [2024-12-16 06:08:14.990567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.140 [2024-12-16 06:08:14.990588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.004260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.004279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.015394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.015412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.027191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.027210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.034222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.034240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.042529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.042549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.054589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.054609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.068291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.068311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.077401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.077421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.084357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.084375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.094518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.094538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.105984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.106003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.119703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.119722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.130544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.130563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.144419] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.144438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.155030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.155049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.161547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.161565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.175904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.175925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.186899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.186923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.200225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.200244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.211342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.211362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.223802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.223821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.235118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.235138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.241770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.241789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.250250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.250269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.410 [2024-12-16 06:08:15.264224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.410 [2024-12-16 06:08:15.264244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.671 [2024-12-16 06:08:15.275347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.671 [2024-12-16 06:08:15.275366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.671 [2024-12-16 06:08:15.287823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.671 [2024-12-16 06:08:15.287842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.671 [2024-12-16 06:08:15.298611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.671 [2024-12-16 06:08:15.298629] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.671 [2024-12-16 06:08:15.312940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.671 [2024-12-16 06:08:15.312959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.671 [2024-12-16 06:08:15.321910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.671 [2024-12-16 06:08:15.321928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.671 [2024-12-16 06:08:15.336844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.671 [2024-12-16 06:08:15.336868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.671 [2024-12-16 06:08:15.347457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.671 [2024-12-16 06:08:15.347475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.671 [2024-12-16 06:08:15.360393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.671 [2024-12-16 06:08:15.360412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.671 [2024-12-16 06:08:15.370150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.671 [2024-12-16 06:08:15.370169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.671 [2024-12-16 06:08:15.384149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.671 [2024-12-16 06:08:15.384168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.671 [2024-12-16 06:08:15.394940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.672 [2024-12-16 06:08:15.394958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.672 [2024-12-16 06:08:15.408558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.672 [2024-12-16 06:08:15.408581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.672 [2024-12-16 06:08:15.419005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.672 [2024-12-16 06:08:15.419025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.672 [2024-12-16 06:08:15.433117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.672 [2024-12-16 06:08:15.433136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.672 [2024-12-16 06:08:15.442743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.672 [2024-12-16 06:08:15.442762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.672 [2024-12-16 06:08:15.456888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.672 [2024-12-16 06:08:15.456907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.672 [2024-12-16 06:08:15.465758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.672 [2024-12-16 06:08:15.465776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.672 [2024-12-16 06:08:15.480211] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.672 [2024-12-16 06:08:15.480230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.672 [2024-12-16 06:08:15.490810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.672 [2024-12-16 06:08:15.490829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.672 [2024-12-16 06:08:15.504343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.672 [2024-12-16 06:08:15.504364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.672 [2024-12-16 06:08:15.513223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.672 [2024-12-16 06:08:15.513243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.672 [2024-12-16 06:08:15.519845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.672 [2024-12-16 06:08:15.519872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.530007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.530028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.544210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.544230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.555399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.555418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.567785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.567804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.578456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.578475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.592836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.592862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.602777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.602797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.617350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.617370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.624994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.625016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.634176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.634196] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.648081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.648100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.658390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.658409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.672366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.672387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.686927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.686948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.697817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.697837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.712076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.712096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.721488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.721508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.736282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.736303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.751274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.751292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.758988] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.759009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.770565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.770584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.931 [2024-12-16 06:08:15.783591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.931 [2024-12-16 06:08:15.783613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.190 [2024-12-16 06:08:15.795736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.190 [2024-12-16 06:08:15.795756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.190 [2024-12-16 06:08:15.806157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.190 [2024-12-16 06:08:15.806177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.190 [2024-12-16 06:08:15.820509] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:42.190 [2024-12-16 06:08:15.820529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:42.190 [2024-12-16 06:08:15.830973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:42.190 [2024-12-16 06:08:15.830993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:42.190 [2024-12-16 06:08:15.842266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:42.190 [2024-12-16 06:08:15.842285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:42.190 [2024-12-16 06:08:15.856771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:42.190 [2024-12-16 06:08:15.856791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:42.190 [2024-12-16 06:08:15.871506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:42.190 [2024-12-16 06:08:15.871525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:42.190 [2024-12-16 06:08:15.882663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:42.190 [2024-12-16 06:08:15.882682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:42.190 [2024-12-16 06:08:15.894601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:42.190 [2024-12-16 06:08:15.894620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:42.190 [2024-12-16 06:08:15.908497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:42.190 [2024-12-16 06:08:15.908516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:42.190 [2024-12-16 06:08:15.918279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:42.190 [2024-12-16 06:08:15.918298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:42.190 [2024-12-16 06:08:15.932558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:42.190 [2024-12-16 06:08:15.932576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:42.190 16523.00 IOPS, 129.09 MiB/s [2024-12-16T05:08:16.046Z]
00:39:42.190 [2024-12-16 06:08:15.943643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:42.190 [2024-12-16 06:08:15.943662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:42.190
00:39:42.190 Latency(us)
00:39:42.190 [2024-12-16T05:08:16.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:42.190 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:39:42.190 Nvme1n1 : 5.01 16524.15 129.09 0.00 0.00 7739.06 1989.49 13294.45
00:39:42.190 [2024-12-16T05:08:16.046Z] ===================================================================================================================
00:39:42.190 [2024-12-16T05:08:16.046Z] Total : 16524.15 129.09 0.00 0.00 7739.06 1989.49 13294.45
00:39:42.190 [2024-12-16 06:08:15.951073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:42.190 [2024-12-16 06:08:15.951090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:42.190 [2024-12-16
06:08:15.959069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.190 [2024-12-16 06:08:15.959084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.190 [2024-12-16 06:08:15.967070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.190 [2024-12-16 06:08:15.967082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.191 [2024-12-16 06:08:15.983090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.191 [2024-12-16 06:08:15.983123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.191 [2024-12-16 06:08:15.991070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.191 [2024-12-16 06:08:15.991083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.191 [2024-12-16 06:08:15.999074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.191 [2024-12-16 06:08:15.999086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.191 [2024-12-16 06:08:16.007069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.191 [2024-12-16 06:08:16.007081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.191 [2024-12-16 06:08:16.015069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.191 [2024-12-16 06:08:16.015081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.191 [2024-12-16 06:08:16.023070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.191 [2024-12-16 06:08:16.023084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.191 [2024-12-16 06:08:16.031069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.191 [2024-12-16 06:08:16.031080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.191 [2024-12-16 06:08:16.039072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.191 [2024-12-16 06:08:16.039087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.450 [2024-12-16 06:08:16.047071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.450 [2024-12-16 06:08:16.047085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.450 [2024-12-16 06:08:16.055067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.450 [2024-12-16 06:08:16.055080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.450 [2024-12-16 06:08:16.063069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.450 [2024-12-16 06:08:16.063081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.450 [2024-12-16 06:08:16.071071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.450 [2024-12-16 06:08:16.071082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.450 [2024-12-16 06:08:16.079071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.450 [2024-12-16 06:08:16.079082] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.450 [2024-12-16 06:08:16.087066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.450 [2024-12-16 06:08:16.087077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.450 [2024-12-16 06:08:16.095066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.450 [2024-12-16 06:08:16.095077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.450 [2024-12-16 06:08:16.103068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.450 [2024-12-16 06:08:16.103079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.450 [2024-12-16 06:08:16.111067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.450 [2024-12-16 06:08:16.111077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.450 [2024-12-16 06:08:16.119066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:42.450 [2024-12-16 06:08:16.119076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3635565) - No such process 00:39:42.450 06:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3635565 00:39:42.450 06:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:42.450 06:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:42.450 06:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:42.450 06:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:42.450 06:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:42.450 06:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:42.450 06:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:42.450 delay0 00:39:42.450 06:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:42.450 06:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:42.450 06:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:42.450 06:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:42.450 06:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:42.450 06:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:39:42.450 [2024-12-16 06:08:16.243283] nvme_fabric.c: 295:nvme_fabric_discover_probe: 
*WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:50.568 [2024-12-16 06:08:23.289658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7d4b0 is same with the state(6) to be set 00:39:50.568 Initializing NVMe Controllers 00:39:50.568 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:50.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:50.568 Initialization complete. Launching workers. 00:39:50.568 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 265, failed: 16971 00:39:50.568 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 17133, failed to submit 103 00:39:50.568 success 17036, unsuccessful 97, failed 0 00:39:50.568 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:39:50.568 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:39:50.568 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:50.568 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:39:50.568 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:50.568 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:39:50.568 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:50.568 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:50.568 rmmod nvme_tcp 00:39:50.568 rmmod nvme_fabrics 00:39:50.568 rmmod nvme_keyring 00:39:50.568 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:50.568 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:39:50.568 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:39:50.568 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 3633771 ']' 00:39:50.568 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 3633771 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3633771 ']' 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3633771 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3633771 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3633771' 00:39:50.569 killing process 
with pid 3633771 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3633771 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3633771 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:50.569 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:51.945 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:51.945 00:39:51.945 real 0m31.782s 00:39:51.945 user 0m41.525s 00:39:51.945 sys 0m12.838s 00:39:51.945 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:51.945 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:51.945 ************************************ 00:39:51.945 END TEST nvmf_zcopy 00:39:51.945 ************************************ 00:39:51.945 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:51.945 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:51.945 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:51.945 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:51.945 ************************************ 00:39:51.945 START TEST nvmf_nmic 00:39:51.945 ************************************ 00:39:51.945 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:52.204 * Looking for test storage... 
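
Before the nmic run gets going, note what the tail of the nvmf_zcopy test above was doing: after the namespace-collision loop it swaps the subsystem's namespace for a deliberately slow delay bdev and drives it with the abort example. The rpc_cmd calls in the trace are the test suite's thin wrapper around SPDK's rpc.py, so a by-hand equivalent would look roughly like the sketch below (paths relative to the spdk checkout; malloc0 is the base bdev created earlier in this run):

  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # drive the delayed namespace with the abort example, as in the trace above
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
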
00:39:52.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:52.204 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:52.204 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:52.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:52.205 --rc genhtml_branch_coverage=1 00:39:52.205 --rc genhtml_function_coverage=1 00:39:52.205 --rc genhtml_legend=1 00:39:52.205 --rc geninfo_all_blocks=1 00:39:52.205 --rc geninfo_unexecuted_blocks=1 00:39:52.205 00:39:52.205 ' 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:52.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:52.205 --rc genhtml_branch_coverage=1 00:39:52.205 --rc genhtml_function_coverage=1 00:39:52.205 --rc genhtml_legend=1 00:39:52.205 --rc geninfo_all_blocks=1 00:39:52.205 --rc geninfo_unexecuted_blocks=1 00:39:52.205 00:39:52.205 ' 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:52.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:52.205 --rc genhtml_branch_coverage=1 00:39:52.205 --rc genhtml_function_coverage=1 00:39:52.205 --rc genhtml_legend=1 00:39:52.205 --rc geninfo_all_blocks=1 00:39:52.205 --rc geninfo_unexecuted_blocks=1 00:39:52.205 00:39:52.205 ' 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:52.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:52.205 --rc genhtml_branch_coverage=1 00:39:52.205 --rc genhtml_function_coverage=1 00:39:52.205 --rc genhtml_legend=1 00:39:52.205 --rc geninfo_all_blocks=1 00:39:52.205 --rc geninfo_unexecuted_blocks=1 00:39:52.205 00:39:52.205 ' 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:52.205 06:08:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:52.205 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:52.206 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:52.206 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:52.206 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:52.206 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:52.206 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:52.206 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:52.206 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:52.206 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:52.206 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:52.206 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:52.206 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:39:52.206 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:57.475 06:08:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:57.475 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:57.475 06:08:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:57.475 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:57.475 Found net devices under 0000:af:00.0: cvl_0_0 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:57.475 06:08:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:57.475 Found net devices under 0000:af:00.1: cvl_0_1 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
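
For readers skimming the trace, the nvmf_tcp_init steps being logged here (above and immediately below) amount to the following plumbing, sketched with the cvl_0_0/cvl_0_1 device names and 10.0.0.0/24 addresses specific to this host:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                   # the target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                             # sanity checks, mirrored in the log below
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
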
00:39:57.475 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:57.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:57.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:39:57.734 00:39:57.734 --- 10.0.0.2 ping statistics --- 00:39:57.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:57.734 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:57.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:57.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:39:57.734 00:39:57.734 --- 10.0.0.1 ping statistics --- 00:39:57.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:57.734 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=3640954 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 3640954 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@831 -- # '[' -z 3640954 ']' 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:57.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:57.734 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.734 [2024-12-16 06:08:31.496462] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:57.734 [2024-12-16 06:08:31.497419] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:39:57.734 [2024-12-16 06:08:31.497455] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:57.734 [2024-12-16 06:08:31.558725] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:57.993 [2024-12-16 06:08:31.600880] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:57.993 [2024-12-16 06:08:31.600917] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:57.993 [2024-12-16 06:08:31.600924] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:57.993 [2024-12-16 06:08:31.600930] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:57.993 [2024-12-16 06:08:31.600935] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:57.993 [2024-12-16 06:08:31.600972] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:57.993 [2024-12-16 06:08:31.601070] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:39:57.993 [2024-12-16 06:08:31.601163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:39:57.993 [2024-12-16 06:08:31.601164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:57.993 [2024-12-16 06:08:31.670765] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:57.993 [2024-12-16 06:08:31.670865] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:57.993 [2024-12-16 06:08:31.671102] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:57.993 [2024-12-16 06:08:31.671368] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:39:57.993 [2024-12-16 06:08:31.671587] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.993 [2024-12-16 06:08:31.733901] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.993 Malloc0 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:57.993 06:08:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.993 [2024-12-16 06:08:31.781832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:57.993 test case1: single bdev can't be used in multiple subsystems 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.993 [2024-12-16 06:08:31.805564] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:39:57.993 [2024-12-16 06:08:31.805586] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:57.993 [2024-12-16 06:08:31.805593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:57.993 request: 00:39:57.993 { 00:39:57.993 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:57.993 "namespace": { 00:39:57.993 "bdev_name": "Malloc0", 00:39:57.993 "no_auto_visible": false 00:39:57.993 }, 00:39:57.993 "method": "nvmf_subsystem_add_ns", 00:39:57.993 "req_id": 1 00:39:57.993 } 00:39:57.993 Got JSON-RPC error response 00:39:57.993 response: 00:39:57.993 { 00:39:57.993 "code": -32602, 00:39:57.993 "message": "Invalid parameters" 00:39:57.993 } 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:57.993 Adding namespace failed - expected result. 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:57.993 test case2: host connect to nvmf target in multiple paths 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.993 [2024-12-16 06:08:31.813673] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:57.993 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:58.252 06:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:58.510 06:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:58.510 06:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:39:58.510 06:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:39:58.510 06:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:39:58.510 06:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:40:01.037 06:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:40:01.037 06:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:40:01.037 06:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:40:01.037 06:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:40:01.037 06:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:40:01.037 06:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:40:01.037 06:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:01.037 [global] 00:40:01.037 thread=1 00:40:01.037 invalidate=1 00:40:01.037 rw=write 00:40:01.037 time_based=1 00:40:01.037 runtime=1 00:40:01.037 ioengine=libaio 00:40:01.037 direct=1 00:40:01.037 bs=4096 00:40:01.037 iodepth=1 
00:40:01.037 norandommap=0 00:40:01.037 numjobs=1 00:40:01.037 00:40:01.037 verify_dump=1 00:40:01.037 verify_backlog=512 00:40:01.037 verify_state_save=0 00:40:01.037 do_verify=1 00:40:01.037 verify=crc32c-intel 00:40:01.037 [job0] 00:40:01.037 filename=/dev/nvme0n1 00:40:01.037 Could not set queue depth (nvme0n1) 00:40:01.037 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:01.037 fio-3.35 00:40:01.037 Starting 1 thread 00:40:01.968 00:40:01.968 job0: (groupid=0, jobs=1): err= 0: pid=3641631: Mon Dec 16 06:08:35 2024 00:40:01.968 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:40:01.968 slat (nsec): min=6115, max=25911, avg=6862.39, stdev=815.24 00:40:01.968 clat (usec): min=184, max=273, avg=215.93, stdev=22.39 00:40:01.968 lat (usec): min=196, max=281, avg=222.79, stdev=22.38 00:40:01.968 clat percentiles (usec): 00:40:01.968 | 1.00th=[ 194], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 200], 00:40:01.968 | 30.00th=[ 202], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 206], 00:40:01.968 | 70.00th=[ 212], 80.00th=[ 247], 90.00th=[ 253], 95.00th=[ 258], 00:40:01.968 | 99.00th=[ 262], 99.50th=[ 265], 99.90th=[ 269], 99.95th=[ 269], 00:40:01.968 | 99.99th=[ 273] 00:40:01.968 write: IOPS=2744, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1001msec); 0 zone resets 00:40:01.968 slat (nsec): min=8821, max=41215, avg=9897.92, stdev=1283.82 00:40:01.968 clat (usec): min=117, max=383, avg=142.33, stdev=14.68 00:40:01.968 lat (usec): min=134, max=425, avg=152.23, stdev=14.93 00:40:01.968 clat percentiles (usec): 00:40:01.968 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 135], 20.00th=[ 137], 00:40:01.968 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 139], 60.00th=[ 141], 00:40:01.968 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 149], 95.00th=[ 165], 00:40:01.968 | 99.00th=[ 194], 99.50th=[ 249], 99.90th=[ 289], 99.95th=[ 302], 00:40:01.968 | 99.99th=[ 383] 00:40:01.968 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:40:01.968 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:40:01.968 lat (usec) : 250=92.52%, 500=7.48% 00:40:01.968 cpu : usr=2.20%, sys=5.00%, ctx=5307, majf=0, minf=1 00:40:01.968 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:01.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:01.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:01.968 issued rwts: total=2560,2747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:01.968 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:01.968 00:40:01.968 Run status group 0 (all jobs): 00:40:01.968 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:40:01.968 WRITE: bw=10.7MiB/s (11.2MB/s), 10.7MiB/s-10.7MiB/s (11.2MB/s-11.2MB/s), io=10.7MiB (11.3MB), run=1001-1001msec 00:40:01.968 00:40:01.968 Disk stats (read/write): 00:40:01.968 nvme0n1: ios=2350/2560, merge=0/0, ticks=499/342, in_queue=841, util=91.38% 00:40:01.968 06:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:02.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:40:02.226 06:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:02.226 06:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 
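
For reference, the job file behind the fio numbers above is printed in full by the fio wrapper: its -i 4096 -d 1 -t write -r 1 -v arguments become bs, iodepth, rw, runtime and the crc32c verify options. To reproduce the same I/O pattern by hand you could write the file yourself, roughly as below (nmic-job0.fio is an arbitrary name for this sketch, and /dev/nvme0n1 is simply whatever device the connect step exposed on this host):

  cat > nmic-job0.fio <<'EOF'
  [global]
  thread=1
  invalidate=1
  rw=write
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=1
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel
  [job0]
  filename=/dev/nvme0n1
  EOF
  fio nmic-job0.fio
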
00:40:02.226 06:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:40:02.226 06:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:02.226 06:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:40:02.226 06:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:02.226 06:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:40:02.226 06:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:40:02.226 06:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:40:02.226 06:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:02.226 06:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:40:02.226 06:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:02.226 06:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:40:02.226 06:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:02.226 06:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:02.226 rmmod nvme_tcp 00:40:02.226 rmmod nvme_fabrics 00:40:02.226 rmmod nvme_keyring 00:40:02.226 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:02.226 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:40:02.226 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:40:02.226 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 3640954 ']' 00:40:02.226 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 3640954 00:40:02.226 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3640954 ']' 00:40:02.226 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3640954 00:40:02.226 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:40:02.226 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:02.226 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3640954 00:40:02.485 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:02.485 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:02.485 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3640954' 00:40:02.485 killing process with pid 3640954 00:40:02.485 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3640954 00:40:02.485 06:08:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3640954 00:40:02.485 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:02.485 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:02.485 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:02.485 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:40:02.485 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:40:02.485 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:02.485 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:40:02.485 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:02.485 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:02.485 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:02.485 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:02.485 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:05.016 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:05.017 00:40:05.017 real 0m12.624s 00:40:05.017 user 0m24.228s 00:40:05.017 sys 0m5.800s 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:05.017 ************************************ 00:40:05.017 END TEST nvmf_nmic 00:40:05.017 ************************************ 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:05.017 ************************************ 00:40:05.017 START TEST nvmf_fio_target 00:40:05.017 ************************************ 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:05.017 * Looking for test storage... 
00:40:05.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:05.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:05.017 --rc genhtml_branch_coverage=1 00:40:05.017 --rc genhtml_function_coverage=1 00:40:05.017 --rc genhtml_legend=1 00:40:05.017 --rc geninfo_all_blocks=1 00:40:05.017 --rc geninfo_unexecuted_blocks=1 00:40:05.017 00:40:05.017 ' 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:05.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:05.017 --rc genhtml_branch_coverage=1 00:40:05.017 --rc genhtml_function_coverage=1 00:40:05.017 --rc genhtml_legend=1 00:40:05.017 --rc geninfo_all_blocks=1 00:40:05.017 --rc geninfo_unexecuted_blocks=1 00:40:05.017 00:40:05.017 ' 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:05.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:05.017 --rc genhtml_branch_coverage=1 00:40:05.017 --rc genhtml_function_coverage=1 00:40:05.017 --rc genhtml_legend=1 00:40:05.017 --rc geninfo_all_blocks=1 00:40:05.017 --rc geninfo_unexecuted_blocks=1 00:40:05.017 00:40:05.017 ' 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:05.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:05.017 --rc genhtml_branch_coverage=1 00:40:05.017 --rc genhtml_function_coverage=1 00:40:05.017 --rc genhtml_legend=1 00:40:05.017 --rc geninfo_all_blocks=1 00:40:05.017 --rc geninfo_unexecuted_blocks=1 00:40:05.017 
00:40:05.017 ' 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:05.017 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:40:05.018 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:10.285 06:08:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:10.285 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # 
for pci in "${pci_devs[@]}" 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:10.286 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:10.286 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:10.286 Found net devices under 0000:af:00.0: cvl_0_0 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:10.286 06:08:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:10.286 Found net devices under 0000:af:00.1: cvl_0_1 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:10.286 06:08:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:10.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:10.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:40:10.286 00:40:10.286 --- 10.0.0.2 ping statistics --- 00:40:10.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:10.286 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:10.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:10.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:40:10.286 00:40:10.286 --- 10.0.0.1 ping statistics --- 00:40:10.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:10.286 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=3645105 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 3645105 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3645105 ']' 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:10.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:10.286 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:10.286 [2024-12-16 06:08:43.916482] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:10.287 [2024-12-16 06:08:43.917387] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:40:10.287 [2024-12-16 06:08:43.917419] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:10.287 [2024-12-16 06:08:43.977623] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:10.287 [2024-12-16 06:08:44.017988] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:10.287 [2024-12-16 06:08:44.018026] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:10.287 [2024-12-16 06:08:44.018033] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:10.287 [2024-12-16 06:08:44.018039] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:10.287 [2024-12-16 06:08:44.018044] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:10.287 [2024-12-16 06:08:44.018081] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:10.287 [2024-12-16 06:08:44.018183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:40:10.287 [2024-12-16 06:08:44.018205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:40:10.287 [2024-12-16 06:08:44.018205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:10.287 [2024-12-16 06:08:44.091046] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:10.287 [2024-12-16 06:08:44.091123] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:10.287 [2024-12-16 06:08:44.091281] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:10.287 [2024-12-16 06:08:44.091581] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:10.287 [2024-12-16 06:08:44.091791] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:40:10.287 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:10.287 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:40:10.287 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:10.287 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:10.287 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:10.545 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:10.545 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:10.545 [2024-12-16 06:08:44.322947] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:10.545 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:10.804 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:40:10.804 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:11.062 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:40:11.062 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:11.320 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:40:11.320 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:11.320 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:40:11.320 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:40:11.577 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:11.835 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:40:11.835 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:12.092 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:40:12.092 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:12.350 06:08:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:40:12.350 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:40:12.350 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:12.608 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:12.608 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:12.865 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:12.865 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:40:12.865 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:13.122 [2024-12-16 06:08:46.858844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:13.122 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:40:13.379 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:40:13.636 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:13.894 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:40:13.894 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:40:13.894 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:40:13.894 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:40:13.894 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:40:13.894 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:40:15.793 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:40:15.793 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:40:15.793 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:40:15.793 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:40:15.793 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:40:15.793 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:40:15.793 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:15.793 [global] 00:40:15.793 thread=1 00:40:15.793 invalidate=1 00:40:15.793 rw=write 00:40:15.793 time_based=1 00:40:15.793 runtime=1 00:40:15.793 ioengine=libaio 00:40:15.793 direct=1 00:40:15.793 bs=4096 00:40:15.793 iodepth=1 00:40:15.793 norandommap=0 00:40:15.793 numjobs=1 00:40:15.793 00:40:15.793 verify_dump=1 00:40:15.793 verify_backlog=512 00:40:15.793 verify_state_save=0 00:40:15.793 do_verify=1 00:40:15.793 verify=crc32c-intel 00:40:15.793 [job0] 00:40:15.793 filename=/dev/nvme0n1 00:40:15.793 [job1] 00:40:15.793 filename=/dev/nvme0n2 00:40:15.793 [job2] 00:40:15.793 filename=/dev/nvme0n3 00:40:15.793 [job3] 00:40:15.793 filename=/dev/nvme0n4 00:40:15.793 Could not set queue depth (nvme0n1) 00:40:15.793 Could not set queue depth (nvme0n2) 00:40:15.793 Could not set queue depth (nvme0n3) 00:40:15.793 Could not set queue depth (nvme0n4) 00:40:16.051 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:16.051 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:16.051 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:16.051 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:16.051 fio-3.35 00:40:16.051 Starting 4 threads 00:40:17.426 00:40:17.426 job0: (groupid=0, jobs=1): err= 0: pid=3646201: Mon Dec 16 06:08:51 2024 00:40:17.426 read: IOPS=368, BW=1473KiB/s (1509kB/s)(1488KiB/1010msec) 00:40:17.426 slat (nsec): min=6829, max=23627, avg=8439.15, stdev=3013.44 00:40:17.426 clat (usec): min=190, max=41030, avg=2414.66, stdev=9110.53 00:40:17.426 lat (usec): min=198, max=41042, avg=2423.10, stdev=9112.98 00:40:17.426 clat percentiles (usec): 00:40:17.426 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 212], 20.00th=[ 229], 00:40:17.426 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 260], 00:40:17.426 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[40633], 00:40:17.426 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:17.426 | 99.99th=[41157] 00:40:17.426 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:40:17.426 slat (nsec): min=9742, max=60467, avg=11792.89, stdev=3318.06 00:40:17.426 clat (usec): min=142, max=371, avg=193.26, stdev=28.07 00:40:17.426 lat (usec): min=154, max=383, avg=205.05, stdev=29.05 00:40:17.426 clat percentiles (usec): 00:40:17.426 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 176], 00:40:17.426 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:40:17.426 | 70.00th=[ 200], 80.00th=[ 210], 90.00th=[ 223], 95.00th=[ 239], 00:40:17.426 | 99.00th=[ 
310], 99.50th=[ 355], 99.90th=[ 371], 99.95th=[ 371], 00:40:17.426 | 99.99th=[ 371] 00:40:17.426 bw ( KiB/s): min= 4096, max= 4096, per=40.88%, avg=4096.00, stdev= 0.00, samples=1 00:40:17.426 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:17.426 lat (usec) : 250=75.90%, 500=21.83% 00:40:17.426 lat (msec) : 50=2.26% 00:40:17.426 cpu : usr=0.59%, sys=1.49%, ctx=885, majf=0, minf=1 00:40:17.426 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:17.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.426 issued rwts: total=372,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:17.426 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:17.426 job1: (groupid=0, jobs=1): err= 0: pid=3646202: Mon Dec 16 06:08:51 2024 00:40:17.426 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:40:17.426 slat (nsec): min=10776, max=36375, avg=20865.32, stdev=4886.00 00:40:17.426 clat (usec): min=40518, max=41181, avg=40955.36, stdev=109.86 00:40:17.426 lat (usec): min=40529, max=41194, avg=40976.22, stdev=111.33 00:40:17.426 clat percentiles (usec): 00:40:17.426 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:40:17.426 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:17.426 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:17.426 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:17.426 | 99.99th=[41157] 00:40:17.426 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:40:17.426 slat (nsec): min=9894, max=43874, avg=12232.15, stdev=3462.54 00:40:17.426 clat (usec): min=149, max=587, avg=187.61, stdev=31.07 00:40:17.426 lat (usec): min=162, max=631, avg=199.85, stdev=32.58 00:40:17.426 clat percentiles (usec): 00:40:17.426 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 172], 00:40:17.426 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:40:17.426 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 227], 00:40:17.426 | 99.00th=[ 310], 99.50th=[ 408], 99.90th=[ 586], 99.95th=[ 586], 00:40:17.426 | 99.99th=[ 586] 00:40:17.426 bw ( KiB/s): min= 4096, max= 4096, per=40.88%, avg=4096.00, stdev= 0.00, samples=1 00:40:17.426 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:17.426 lat (usec) : 250=93.26%, 500=2.43%, 750=0.19% 00:40:17.426 lat (msec) : 50=4.12% 00:40:17.426 cpu : usr=0.50%, sys=0.90%, ctx=534, majf=0, minf=1 00:40:17.426 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:17.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.426 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:17.426 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:17.426 job2: (groupid=0, jobs=1): err= 0: pid=3646207: Mon Dec 16 06:08:51 2024 00:40:17.426 read: IOPS=21, BW=86.8KiB/s (88.9kB/s)(88.0KiB/1014msec) 00:40:17.426 slat (nsec): min=9880, max=23665, avg=21511.95, stdev=2787.92 00:40:17.426 clat (usec): min=40773, max=41083, avg=40960.72, stdev=68.64 00:40:17.426 lat (usec): min=40783, max=41105, avg=40982.23, stdev=70.29 00:40:17.426 clat percentiles (usec): 00:40:17.426 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:17.426 | 30.00th=[41157], 40.00th=[41157], 
50.00th=[41157], 60.00th=[41157], 00:40:17.426 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:17.426 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:17.426 | 99.99th=[41157] 00:40:17.426 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:40:17.426 slat (nsec): min=10165, max=56273, avg=13373.10, stdev=4664.89 00:40:17.426 clat (usec): min=147, max=1585, avg=203.53, stdev=86.59 00:40:17.426 lat (usec): min=161, max=1599, avg=216.90, stdev=86.77 00:40:17.426 clat percentiles (usec): 00:40:17.426 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 180], 00:40:17.426 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:40:17.426 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 231], 95.00th=[ 247], 00:40:17.426 | 99.00th=[ 363], 99.50th=[ 445], 99.90th=[ 1582], 99.95th=[ 1582], 00:40:17.426 | 99.99th=[ 1582] 00:40:17.426 bw ( KiB/s): min= 4096, max= 4096, per=40.88%, avg=4096.00, stdev= 0.00, samples=1 00:40:17.426 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:17.426 lat (usec) : 250=91.57%, 500=3.93% 00:40:17.426 lat (msec) : 2=0.37%, 50=4.12% 00:40:17.426 cpu : usr=0.20%, sys=0.99%, ctx=534, majf=0, minf=1 00:40:17.426 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:17.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.426 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:17.426 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:17.426 job3: (groupid=0, jobs=1): err= 0: pid=3646210: Mon Dec 16 06:08:51 2024 00:40:17.426 read: IOPS=517, BW=2070KiB/s (2120kB/s)(2116KiB/1022msec) 00:40:17.426 slat (nsec): min=7188, max=61544, avg=9906.21, stdev=4037.64 00:40:17.426 clat (usec): min=185, max=41244, avg=1537.11, stdev=7195.45 00:40:17.426 lat (usec): min=202, max=41289, avg=1547.02, stdev=7198.05 00:40:17.426 clat percentiles (usec): 00:40:17.426 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:40:17.426 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 227], 00:40:17.426 | 70.00th=[ 231], 80.00th=[ 241], 90.00th=[ 255], 95.00th=[ 289], 00:40:17.426 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:17.426 | 99.99th=[41157] 00:40:17.426 write: IOPS=1001, BW=4008KiB/s (4104kB/s)(4096KiB/1022msec); 0 zone resets 00:40:17.426 slat (nsec): min=10638, max=60263, avg=12661.13, stdev=2923.57 00:40:17.426 clat (usec): min=129, max=1828, avg=177.34, stdev=61.32 00:40:17.426 lat (usec): min=143, max=1840, avg=190.00, stdev=61.78 00:40:17.426 clat percentiles (usec): 00:40:17.426 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 149], 00:40:17.426 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 169], 60.00th=[ 180], 00:40:17.426 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 212], 95.00th=[ 229], 00:40:17.426 | 99.00th=[ 297], 99.50th=[ 330], 99.90th=[ 453], 99.95th=[ 1827], 00:40:17.426 | 99.99th=[ 1827] 00:40:17.426 bw ( KiB/s): min= 8192, max= 8192, per=81.76%, avg=8192.00, stdev= 0.00, samples=1 00:40:17.426 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:40:17.426 lat (usec) : 250=94.01%, 500=4.83% 00:40:17.426 lat (msec) : 2=0.06%, 50=1.09% 00:40:17.426 cpu : usr=0.69%, sys=3.23%, ctx=1557, majf=0, minf=1 00:40:17.426 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:17.426 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.426 issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:17.426 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:17.426 00:40:17.426 Run status group 0 (all jobs): 00:40:17.426 READ: bw=3699KiB/s (3787kB/s), 86.8KiB/s-2070KiB/s (88.9kB/s-2120kB/s), io=3780KiB (3871kB), run=1006-1022msec 00:40:17.426 WRITE: bw=9.78MiB/s (10.3MB/s), 2020KiB/s-4008KiB/s (2068kB/s-4104kB/s), io=10.0MiB (10.5MB), run=1006-1022msec 00:40:17.426 00:40:17.426 Disk stats (read/write): 00:40:17.426 nvme0n1: ios=418/512, merge=0/0, ticks=838/93, in_queue=931, util=90.98% 00:40:17.426 nvme0n2: ios=61/512, merge=0/0, ticks=797/86, in_queue=883, util=91.16% 00:40:17.426 nvme0n3: ios=18/512, merge=0/0, ticks=738/101, in_queue=839, util=88.97% 00:40:17.426 nvme0n4: ios=548/1024, merge=0/0, ticks=1594/161, in_queue=1755, util=98.11% 00:40:17.426 06:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:40:17.426 [global] 00:40:17.426 thread=1 00:40:17.426 invalidate=1 00:40:17.426 rw=randwrite 00:40:17.426 time_based=1 00:40:17.426 runtime=1 00:40:17.426 ioengine=libaio 00:40:17.426 direct=1 00:40:17.426 bs=4096 00:40:17.426 iodepth=1 00:40:17.426 norandommap=0 00:40:17.426 numjobs=1 00:40:17.426 00:40:17.426 verify_dump=1 00:40:17.426 verify_backlog=512 00:40:17.426 verify_state_save=0 00:40:17.426 do_verify=1 00:40:17.426 verify=crc32c-intel 00:40:17.426 [job0] 00:40:17.426 filename=/dev/nvme0n1 00:40:17.426 [job1] 00:40:17.426 filename=/dev/nvme0n2 00:40:17.426 [job2] 00:40:17.426 filename=/dev/nvme0n3 00:40:17.426 [job3] 00:40:17.426 filename=/dev/nvme0n4 00:40:17.426 Could not set queue depth (nvme0n1) 00:40:17.426 Could not set queue depth (nvme0n2) 00:40:17.426 Could not set queue depth (nvme0n3) 00:40:17.427 Could not set queue depth (nvme0n4) 00:40:17.685 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:17.685 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:17.685 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:17.685 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:17.685 fio-3.35 00:40:17.685 Starting 4 threads 00:40:19.059 00:40:19.059 job0: (groupid=0, jobs=1): err= 0: pid=3646602: Mon Dec 16 06:08:52 2024 00:40:19.059 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:40:19.059 slat (nsec): min=10933, max=25752, avg=21543.45, stdev=2673.31 00:40:19.059 clat (usec): min=40491, max=41028, avg=40943.76, stdev=109.44 00:40:19.059 lat (usec): min=40502, max=41050, avg=40965.31, stdev=111.34 00:40:19.059 clat percentiles (usec): 00:40:19.059 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:40:19.059 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:19.059 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:19.059 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:19.059 | 99.99th=[41157] 00:40:19.059 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:40:19.059 slat (nsec): min=11291, max=65993, 
avg=12583.98, stdev=2675.28 00:40:19.059 clat (usec): min=150, max=285, avg=187.17, stdev=12.27 00:40:19.059 lat (usec): min=162, max=351, avg=199.75, stdev=13.47 00:40:19.059 clat percentiles (usec): 00:40:19.059 | 1.00th=[ 161], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 180], 00:40:19.059 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 188], 00:40:19.059 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 206], 00:40:19.059 | 99.00th=[ 227], 99.50th=[ 269], 99.90th=[ 285], 99.95th=[ 285], 00:40:19.059 | 99.99th=[ 285] 00:40:19.059 bw ( KiB/s): min= 4096, max= 4096, per=25.18%, avg=4096.00, stdev= 0.00, samples=1 00:40:19.059 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:19.059 lat (usec) : 250=95.32%, 500=0.56% 00:40:19.059 lat (msec) : 50=4.12% 00:40:19.059 cpu : usr=0.20%, sys=1.29%, ctx=535, majf=0, minf=2 00:40:19.059 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:19.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:19.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:19.059 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:19.059 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:19.059 job1: (groupid=0, jobs=1): err= 0: pid=3646614: Mon Dec 16 06:08:52 2024 00:40:19.059 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:40:19.059 slat (nsec): min=10606, max=23303, avg=22003.82, stdev=2566.92 00:40:19.059 clat (usec): min=40889, max=41421, avg=40985.22, stdev=105.99 00:40:19.059 lat (usec): min=40912, max=41431, avg=41007.23, stdev=103.62 00:40:19.059 clat percentiles (usec): 00:40:19.059 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:40:19.059 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:19.059 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:19.059 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:40:19.059 | 99.99th=[41681] 00:40:19.059 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:40:19.059 slat (nsec): min=9924, max=37763, avg=11557.58, stdev=1928.78 00:40:19.059 clat (usec): min=158, max=301, avg=185.24, stdev= 9.87 00:40:19.059 lat (usec): min=169, max=339, avg=196.79, stdev=10.30 00:40:19.059 clat percentiles (usec): 00:40:19.059 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 180], 00:40:19.059 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 188], 00:40:19.059 | 70.00th=[ 190], 80.00th=[ 192], 90.00th=[ 194], 95.00th=[ 198], 00:40:19.059 | 99.00th=[ 206], 99.50th=[ 210], 99.90th=[ 302], 99.95th=[ 302], 00:40:19.059 | 99.99th=[ 302] 00:40:19.060 bw ( KiB/s): min= 4096, max= 4096, per=25.18%, avg=4096.00, stdev= 0.00, samples=1 00:40:19.060 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:19.060 lat (usec) : 250=95.69%, 500=0.19% 00:40:19.060 lat (msec) : 50=4.12% 00:40:19.060 cpu : usr=0.50%, sys=0.30%, ctx=535, majf=0, minf=1 00:40:19.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:19.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:19.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:19.060 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:19.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:19.060 job2: (groupid=0, jobs=1): err= 0: pid=3646630: Mon Dec 16 06:08:52 2024 00:40:19.060 
read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:40:19.060 slat (nsec): min=8370, max=10580, avg=9524.00, stdev=642.86 00:40:19.060 clat (usec): min=40882, max=41122, avg=40983.02, stdev=51.02 00:40:19.060 lat (usec): min=40891, max=41132, avg=40992.54, stdev=50.79 00:40:19.060 clat percentiles (usec): 00:40:19.060 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:40:19.060 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:19.060 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:19.060 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:19.060 | 99.99th=[41157] 00:40:19.060 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:40:19.060 slat (nsec): min=9885, max=39065, avg=11090.24, stdev=1776.49 00:40:19.060 clat (usec): min=152, max=299, avg=189.18, stdev=12.75 00:40:19.060 lat (usec): min=163, max=338, avg=200.27, stdev=13.28 00:40:19.060 clat percentiles (usec): 00:40:19.060 | 1.00th=[ 159], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 182], 00:40:19.060 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:40:19.060 | 70.00th=[ 192], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 208], 00:40:19.060 | 99.00th=[ 225], 99.50th=[ 277], 99.90th=[ 302], 99.95th=[ 302], 00:40:19.060 | 99.99th=[ 302] 00:40:19.060 bw ( KiB/s): min= 4096, max= 4096, per=25.18%, avg=4096.00, stdev= 0.00, samples=1 00:40:19.060 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:19.060 lat (usec) : 250=95.13%, 500=0.75% 00:40:19.060 lat (msec) : 50=4.12% 00:40:19.060 cpu : usr=0.30%, sys=0.50%, ctx=538, majf=0, minf=1 00:40:19.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:19.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:19.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:19.060 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:19.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:19.060 job3: (groupid=0, jobs=1): err= 0: pid=3646635: Mon Dec 16 06:08:52 2024 00:40:19.060 read: IOPS=2313, BW=9255KiB/s (9477kB/s)(9264KiB/1001msec) 00:40:19.060 slat (nsec): min=7721, max=38875, avg=8680.06, stdev=1525.16 00:40:19.060 clat (usec): min=184, max=313, avg=220.57, stdev=11.43 00:40:19.060 lat (usec): min=193, max=322, avg=229.25, stdev=11.51 00:40:19.060 clat percentiles (usec): 00:40:19.060 | 1.00th=[ 206], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 212], 00:40:19.060 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 221], 00:40:19.060 | 70.00th=[ 223], 80.00th=[ 227], 90.00th=[ 241], 95.00th=[ 247], 00:40:19.060 | 99.00th=[ 253], 99.50th=[ 258], 99.90th=[ 265], 99.95th=[ 293], 00:40:19.060 | 99.99th=[ 314] 00:40:19.060 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:40:19.060 slat (nsec): min=9383, max=37125, avg=12250.34, stdev=1722.03 00:40:19.060 clat (usec): min=133, max=286, avg=164.60, stdev=22.11 00:40:19.060 lat (usec): min=145, max=298, avg=176.85, stdev=22.23 00:40:19.060 clat percentiles (usec): 00:40:19.060 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 147], 00:40:19.060 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 159], 00:40:19.060 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 194], 95.00th=[ 198], 00:40:19.060 | 99.00th=[ 215], 99.50th=[ 258], 99.90th=[ 273], 99.95th=[ 277], 00:40:19.060 | 99.99th=[ 285] 00:40:19.060 bw ( KiB/s): min=11320, 
max=11320, per=69.58%, avg=11320.00, stdev= 0.00, samples=1 00:40:19.060 iops : min= 2830, max= 2830, avg=2830.00, stdev= 0.00, samples=1 00:40:19.060 lat (usec) : 250=98.44%, 500=1.56% 00:40:19.060 cpu : usr=3.80%, sys=8.40%, ctx=4879, majf=0, minf=1 00:40:19.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:19.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:19.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:19.060 issued rwts: total=2316,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:19.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:19.060 00:40:19.060 Run status group 0 (all jobs): 00:40:19.060 READ: bw=9462KiB/s (9689kB/s), 87.4KiB/s-9255KiB/s (89.5kB/s-9477kB/s), io=9528KiB (9757kB), run=1001-1007msec 00:40:19.060 WRITE: bw=15.9MiB/s (16.7MB/s), 2034KiB/s-9.99MiB/s (2083kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1007msec 00:40:19.060 00:40:19.060 Disk stats (read/write): 00:40:19.060 nvme0n1: ios=68/512, merge=0/0, ticks=762/93, in_queue=855, util=86.97% 00:40:19.060 nvme0n2: ios=23/512, merge=0/0, ticks=744/94, in_queue=838, util=87.01% 00:40:19.060 nvme0n3: ios=42/512, merge=0/0, ticks=1722/95, in_queue=1817, util=98.34% 00:40:19.060 nvme0n4: ios=2074/2096, merge=0/0, ticks=1395/320, in_queue=1715, util=98.64% 00:40:19.060 06:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:40:19.060 [global] 00:40:19.060 thread=1 00:40:19.060 invalidate=1 00:40:19.060 rw=write 00:40:19.060 time_based=1 00:40:19.060 runtime=1 00:40:19.060 ioengine=libaio 00:40:19.060 direct=1 00:40:19.060 bs=4096 00:40:19.060 iodepth=128 00:40:19.060 norandommap=0 00:40:19.060 numjobs=1 00:40:19.060 00:40:19.060 verify_dump=1 00:40:19.060 verify_backlog=512 00:40:19.060 verify_state_save=0 00:40:19.060 do_verify=1 00:40:19.060 verify=crc32c-intel 00:40:19.060 [job0] 00:40:19.060 filename=/dev/nvme0n1 00:40:19.060 [job1] 00:40:19.060 filename=/dev/nvme0n2 00:40:19.060 [job2] 00:40:19.060 filename=/dev/nvme0n3 00:40:19.060 [job3] 00:40:19.060 filename=/dev/nvme0n4 00:40:19.060 Could not set queue depth (nvme0n1) 00:40:19.060 Could not set queue depth (nvme0n2) 00:40:19.060 Could not set queue depth (nvme0n3) 00:40:19.060 Could not set queue depth (nvme0n4) 00:40:19.318 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:19.318 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:19.318 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:19.318 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:19.318 fio-3.35 00:40:19.318 Starting 4 threads 00:40:20.689 00:40:20.689 job0: (groupid=0, jobs=1): err= 0: pid=3647018: Mon Dec 16 06:08:54 2024 00:40:20.689 read: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec) 00:40:20.689 slat (nsec): min=1243, max=21842k, avg=75042.46, stdev=659330.22 00:40:20.689 clat (usec): min=3826, max=35662, avg=10255.22, stdev=5373.54 00:40:20.689 lat (usec): min=3833, max=35671, avg=10330.26, stdev=5410.72 00:40:20.689 clat percentiles (usec): 00:40:20.689 | 1.00th=[ 4228], 5.00th=[ 5538], 10.00th=[ 5932], 20.00th=[ 6849], 00:40:20.689 | 30.00th=[ 7308], 40.00th=[ 7635], 50.00th=[ 
8291], 60.00th=[ 9634], 00:40:20.689 | 70.00th=[10290], 80.00th=[12387], 90.00th=[16712], 95.00th=[22676], 00:40:20.689 | 99.00th=[31589], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:40:20.689 | 99.99th=[35914] 00:40:20.689 write: IOPS=6385, BW=24.9MiB/s (26.2MB/s)(25.1MiB/1008msec); 0 zone resets 00:40:20.689 slat (usec): min=2, max=21753, avg=74.88, stdev=666.59 00:40:20.689 clat (usec): min=3106, max=34304, avg=10085.44, stdev=5470.41 00:40:20.689 lat (usec): min=3123, max=40564, avg=10160.31, stdev=5506.10 00:40:20.689 clat percentiles (usec): 00:40:20.689 | 1.00th=[ 4047], 5.00th=[ 5014], 10.00th=[ 5211], 20.00th=[ 6456], 00:40:20.689 | 30.00th=[ 7046], 40.00th=[ 7504], 50.00th=[ 8160], 60.00th=[10028], 00:40:20.689 | 70.00th=[10552], 80.00th=[12649], 90.00th=[16188], 95.00th=[23462], 00:40:20.689 | 99.00th=[32637], 99.50th=[32637], 99.90th=[32637], 99.95th=[32637], 00:40:20.689 | 99.99th=[34341] 00:40:20.689 bw ( KiB/s): min=17920, max=32560, per=39.02%, avg=25240.00, stdev=10352.04, samples=2 00:40:20.689 iops : min= 4480, max= 8140, avg=6310.00, stdev=2588.01, samples=2 00:40:20.689 lat (msec) : 4=0.52%, 10=61.77%, 20=30.86%, 50=6.86% 00:40:20.689 cpu : usr=6.16%, sys=6.85%, ctx=359, majf=0, minf=1 00:40:20.689 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:40:20.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:20.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:20.689 issued rwts: total=6144,6437,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:20.689 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:20.689 job1: (groupid=0, jobs=1): err= 0: pid=3647028: Mon Dec 16 06:08:54 2024 00:40:20.689 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:40:20.689 slat (nsec): min=1020, max=20753k, avg=125890.11, stdev=939313.95 00:40:20.689 clat (usec): min=6630, max=62587, avg=14852.36, stdev=7551.07 00:40:20.689 lat (usec): min=6634, max=62611, avg=14978.25, stdev=7638.57 00:40:20.689 clat percentiles (usec): 00:40:20.689 | 1.00th=[ 7635], 5.00th=[ 7832], 10.00th=[ 9765], 20.00th=[10159], 00:40:20.689 | 30.00th=[10421], 40.00th=[11863], 50.00th=[12518], 60.00th=[14353], 00:40:20.689 | 70.00th=[14877], 80.00th=[16319], 90.00th=[23725], 95.00th=[32900], 00:40:20.689 | 99.00th=[44827], 99.50th=[44827], 99.90th=[57410], 99.95th=[57410], 00:40:20.689 | 99.99th=[62653] 00:40:20.689 write: IOPS=3694, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1006msec); 0 zone resets 00:40:20.689 slat (nsec): min=1774, max=19874k, avg=140514.86, stdev=955638.02 00:40:20.689 clat (usec): min=2378, max=89949, avg=19938.98, stdev=16336.96 00:40:20.689 lat (usec): min=5461, max=89960, avg=20079.49, stdev=16440.40 00:40:20.689 clat percentiles (usec): 00:40:20.689 | 1.00th=[ 7242], 5.00th=[ 8225], 10.00th=[ 8455], 20.00th=[10159], 00:40:20.689 | 30.00th=[11731], 40.00th=[12649], 50.00th=[13042], 60.00th=[14746], 00:40:20.689 | 70.00th=[19792], 80.00th=[26084], 90.00th=[40633], 95.00th=[57410], 00:40:20.689 | 99.00th=[89654], 99.50th=[89654], 99.90th=[89654], 99.95th=[89654], 00:40:20.689 | 99.99th=[89654] 00:40:20.689 bw ( KiB/s): min=11864, max=16856, per=22.20%, avg=14360.00, stdev=3529.88, samples=2 00:40:20.689 iops : min= 2966, max= 4214, avg=3590.00, stdev=882.47, samples=2 00:40:20.689 lat (msec) : 4=0.01%, 10=13.45%, 20=64.58%, 50=18.24%, 100=3.71% 00:40:20.689 cpu : usr=2.49%, sys=5.67%, ctx=235, majf=0, minf=2 00:40:20.689 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 
00:40:20.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:20.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:20.689 issued rwts: total=3584,3717,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:20.689 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:20.689 job2: (groupid=0, jobs=1): err= 0: pid=3647039: Mon Dec 16 06:08:54 2024 00:40:20.689 read: IOPS=2519, BW=9.84MiB/s (10.3MB/s)(10.0MiB/1016msec) 00:40:20.689 slat (nsec): min=1092, max=21301k, avg=165206.98, stdev=1161421.40 00:40:20.689 clat (usec): min=6050, max=65142, avg=20088.02, stdev=9627.73 00:40:20.689 lat (usec): min=6054, max=65150, avg=20253.23, stdev=9723.27 00:40:20.689 clat percentiles (usec): 00:40:20.689 | 1.00th=[ 7177], 5.00th=[10290], 10.00th=[11338], 20.00th=[12780], 00:40:20.690 | 30.00th=[13304], 40.00th=[16188], 50.00th=[18744], 60.00th=[19792], 00:40:20.690 | 70.00th=[20841], 80.00th=[25035], 90.00th=[33817], 95.00th=[40109], 00:40:20.690 | 99.00th=[53740], 99.50th=[59507], 99.90th=[65274], 99.95th=[65274], 00:40:20.690 | 99.99th=[65274] 00:40:20.690 write: IOPS=2848, BW=11.1MiB/s (11.7MB/s)(11.3MiB/1016msec); 0 zone resets 00:40:20.690 slat (usec): min=2, max=19799, avg=170.77, stdev=772.04 00:40:20.690 clat (usec): min=3098, max=65140, avg=26803.57, stdev=14200.26 00:40:20.690 lat (usec): min=3108, max=65153, avg=26974.34, stdev=14303.42 00:40:20.690 clat percentiles (usec): 00:40:20.690 | 1.00th=[ 6063], 5.00th=[ 7177], 10.00th=[ 8717], 20.00th=[10945], 00:40:20.690 | 30.00th=[12125], 40.00th=[20841], 50.00th=[26346], 60.00th=[34866], 00:40:20.690 | 70.00th=[39060], 80.00th=[40109], 90.00th=[45351], 95.00th=[49021], 00:40:20.690 | 99.00th=[52691], 99.50th=[54264], 99.90th=[54789], 99.95th=[65274], 00:40:20.690 | 99.99th=[65274] 00:40:20.690 bw ( KiB/s): min= 9472, max=12656, per=17.10%, avg=11064.00, stdev=2251.43, samples=2 00:40:20.690 iops : min= 2368, max= 3164, avg=2766.00, stdev=562.86, samples=2 00:40:20.690 lat (msec) : 4=0.11%, 10=9.92%, 20=38.71%, 50=48.94%, 100=2.33% 00:40:20.690 cpu : usr=1.77%, sys=3.25%, ctx=296, majf=0, minf=1 00:40:20.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:40:20.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:20.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:20.690 issued rwts: total=2560,2894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:20.690 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:20.690 job3: (groupid=0, jobs=1): err= 0: pid=3647042: Mon Dec 16 06:08:54 2024 00:40:20.690 read: IOPS=3020, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1017msec) 00:40:20.690 slat (nsec): min=1864, max=16355k, avg=138873.26, stdev=1013203.46 00:40:20.690 clat (usec): min=3723, max=60705, avg=17523.08, stdev=8701.79 00:40:20.690 lat (usec): min=4316, max=60715, avg=17661.96, stdev=8780.69 00:40:20.690 clat percentiles (usec): 00:40:20.690 | 1.00th=[ 5014], 5.00th=[ 7242], 10.00th=[ 8455], 20.00th=[11338], 00:40:20.690 | 30.00th=[13042], 40.00th=[14091], 50.00th=[15926], 60.00th=[17957], 00:40:20.690 | 70.00th=[18744], 80.00th=[20841], 90.00th=[27395], 95.00th=[33817], 00:40:20.690 | 99.00th=[51643], 99.50th=[56361], 99.90th=[60556], 99.95th=[60556], 00:40:20.690 | 99.99th=[60556] 00:40:20.690 write: IOPS=3341, BW=13.1MiB/s (13.7MB/s)(13.3MiB/1017msec); 0 zone resets 00:40:20.690 slat (usec): min=3, max=17411, avg=156.41, stdev=909.25 00:40:20.690 clat (usec): min=3808, max=60710, avg=22116.97, 
stdev=11803.91 00:40:20.690 lat (usec): min=3818, max=60725, avg=22273.38, stdev=11892.53 00:40:20.690 clat percentiles (usec): 00:40:20.690 | 1.00th=[ 6128], 5.00th=[ 8225], 10.00th=[ 9110], 20.00th=[11863], 00:40:20.690 | 30.00th=[15533], 40.00th=[16319], 50.00th=[17695], 60.00th=[20055], 00:40:20.690 | 70.00th=[26346], 80.00th=[35914], 90.00th=[39584], 95.00th=[46400], 00:40:20.690 | 99.00th=[49546], 99.50th=[51119], 99.90th=[52167], 99.95th=[60556], 00:40:20.690 | 99.99th=[60556] 00:40:20.690 bw ( KiB/s): min= 9472, max=16688, per=20.22%, avg=13080.00, stdev=5102.48, samples=2 00:40:20.690 iops : min= 2368, max= 4172, avg=3270.00, stdev=1275.62, samples=2 00:40:20.690 lat (msec) : 4=0.11%, 10=14.02%, 20=51.21%, 50=33.74%, 100=0.93% 00:40:20.690 cpu : usr=2.85%, sys=4.82%, ctx=257, majf=0, minf=1 00:40:20.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:40:20.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:20.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:20.690 issued rwts: total=3072,3398,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:20.690 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:20.690 00:40:20.690 Run status group 0 (all jobs): 00:40:20.690 READ: bw=59.0MiB/s (61.9MB/s), 9.84MiB/s-23.8MiB/s (10.3MB/s-25.0MB/s), io=60.0MiB (62.9MB), run=1006-1017msec 00:40:20.690 WRITE: bw=63.2MiB/s (66.2MB/s), 11.1MiB/s-24.9MiB/s (11.7MB/s-26.2MB/s), io=64.2MiB (67.4MB), run=1006-1017msec 00:40:20.690 00:40:20.690 Disk stats (read/write): 00:40:20.690 nvme0n1: ios=4992/5120, merge=0/0, ticks=51452/52976, in_queue=104428, util=86.87% 00:40:20.690 nvme0n2: ios=3109/3472, merge=0/0, ticks=21354/28373, in_queue=49727, util=88.43% 00:40:20.690 nvme0n3: ios=2048/2559, merge=0/0, ticks=37964/65906, in_queue=103870, util=88.98% 00:40:20.690 nvme0n4: ios=2560/3063, merge=0/0, ticks=40004/63382, in_queue=103386, util=89.73% 00:40:20.690 06:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:40:20.690 [global] 00:40:20.690 thread=1 00:40:20.690 invalidate=1 00:40:20.690 rw=randwrite 00:40:20.690 time_based=1 00:40:20.690 runtime=1 00:40:20.690 ioengine=libaio 00:40:20.690 direct=1 00:40:20.690 bs=4096 00:40:20.690 iodepth=128 00:40:20.690 norandommap=0 00:40:20.690 numjobs=1 00:40:20.690 00:40:20.690 verify_dump=1 00:40:20.690 verify_backlog=512 00:40:20.690 verify_state_save=0 00:40:20.690 do_verify=1 00:40:20.690 verify=crc32c-intel 00:40:20.690 [job0] 00:40:20.690 filename=/dev/nvme0n1 00:40:20.690 [job1] 00:40:20.690 filename=/dev/nvme0n2 00:40:20.690 [job2] 00:40:20.690 filename=/dev/nvme0n3 00:40:20.690 [job3] 00:40:20.690 filename=/dev/nvme0n4 00:40:20.690 Could not set queue depth (nvme0n1) 00:40:20.690 Could not set queue depth (nvme0n2) 00:40:20.690 Could not set queue depth (nvme0n3) 00:40:20.690 Could not set queue depth (nvme0n4) 00:40:20.947 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:20.947 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:20.947 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:20.947 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:20.947 
fio-3.35 00:40:20.947 Starting 4 threads 00:40:22.315 00:40:22.315 job0: (groupid=0, jobs=1): err= 0: pid=3647433: Mon Dec 16 06:08:55 2024 00:40:22.315 read: IOPS=3093, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1004msec) 00:40:22.315 slat (nsec): min=1102, max=43780k, avg=104432.50, stdev=951278.54 00:40:22.315 clat (usec): min=1407, max=53896, avg=11296.81, stdev=7173.61 00:40:22.315 lat (usec): min=1416, max=53899, avg=11401.24, stdev=7237.41 00:40:22.315 clat percentiles (usec): 00:40:22.315 | 1.00th=[ 1762], 5.00th=[ 2900], 10.00th=[ 5669], 20.00th=[ 8160], 00:40:22.315 | 30.00th=[ 9110], 40.00th=[10028], 50.00th=[10552], 60.00th=[10945], 00:40:22.315 | 70.00th=[11469], 80.00th=[12649], 90.00th=[15401], 95.00th=[19268], 00:40:22.315 | 99.00th=[52167], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:40:22.315 | 99.99th=[53740] 00:40:22.315 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:40:22.315 slat (nsec): min=1801, max=22096k, avg=129509.14, stdev=872345.29 00:40:22.315 clat (usec): min=501, max=146843, avg=19992.62, stdev=27275.08 00:40:22.315 lat (usec): min=509, max=146852, avg=20122.13, stdev=27429.75 00:40:22.315 clat percentiles (usec): 00:40:22.315 | 1.00th=[ 840], 5.00th=[ 2573], 10.00th=[ 4178], 20.00th=[ 5997], 00:40:22.315 | 30.00th=[ 7570], 40.00th=[ 9372], 50.00th=[ 10552], 60.00th=[ 11469], 00:40:22.315 | 70.00th=[ 16909], 80.00th=[ 21365], 90.00th=[ 51643], 95.00th=[ 89654], 00:40:22.315 | 99.00th=[133694], 99.50th=[139461], 99.90th=[147850], 99.95th=[147850], 00:40:22.315 | 99.99th=[147850] 00:40:22.315 bw ( KiB/s): min= 9584, max=26528, per=26.88%, avg=18056.00, stdev=11981.22, samples=2 00:40:22.315 iops : min= 2396, max= 6632, avg=4514.00, stdev=2995.30, samples=2 00:40:22.315 lat (usec) : 750=0.19%, 1000=0.65% 00:40:22.315 lat (msec) : 2=2.26%, 4=5.96%, 10=33.10%, 20=43.66%, 50=6.68% 00:40:22.315 lat (msec) : 100=5.13%, 250=2.37% 00:40:22.315 cpu : usr=2.29%, sys=4.09%, ctx=407, majf=0, minf=2 00:40:22.315 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:40:22.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:22.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:22.315 issued rwts: total=3106,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:22.315 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:22.315 job1: (groupid=0, jobs=1): err= 0: pid=3647443: Mon Dec 16 06:08:55 2024 00:40:22.315 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:40:22.315 slat (nsec): min=1644, max=17835k, avg=92768.79, stdev=787164.44 00:40:22.315 clat (usec): min=3176, max=51635, avg=12624.36, stdev=6132.89 00:40:22.315 lat (usec): min=3179, max=51640, avg=12717.13, stdev=6207.51 00:40:22.315 clat percentiles (usec): 00:40:22.315 | 1.00th=[ 3490], 5.00th=[ 5276], 10.00th=[ 6783], 20.00th=[ 7767], 00:40:22.315 | 30.00th=[ 9110], 40.00th=[10159], 50.00th=[10814], 60.00th=[11600], 00:40:22.315 | 70.00th=[14615], 80.00th=[17695], 90.00th=[20579], 95.00th=[21103], 00:40:22.315 | 99.00th=[31589], 99.50th=[37487], 99.90th=[47449], 99.95th=[47449], 00:40:22.315 | 99.99th=[51643] 00:40:22.315 write: IOPS=3795, BW=14.8MiB/s (15.5MB/s)(15.0MiB/1009msec); 0 zone resets 00:40:22.315 slat (nsec): min=1902, max=41101k, avg=146803.32, stdev=1294548.12 00:40:22.316 clat (usec): min=354, max=115722, avg=21576.39, stdev=25816.43 00:40:22.316 lat (usec): min=380, max=115732, avg=21723.19, stdev=25968.70 00:40:22.316 clat percentiles (msec): 00:40:22.316 | 1.00th=[ 
4], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:40:22.316 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 11], 00:40:22.316 | 70.00th=[ 14], 80.00th=[ 30], 90.00th=[ 68], 95.00th=[ 85], 00:40:22.316 | 99.00th=[ 111], 99.50th=[ 114], 99.90th=[ 116], 99.95th=[ 116], 00:40:22.316 | 99.99th=[ 116] 00:40:22.316 bw ( KiB/s): min= 9136, max=20480, per=22.05%, avg=14808.00, stdev=8021.42, samples=2 00:40:22.316 iops : min= 2284, max= 5120, avg=3702.00, stdev=2005.35, samples=2 00:40:22.316 lat (usec) : 500=0.03% 00:40:22.316 lat (msec) : 2=0.34%, 4=1.82%, 10=46.98%, 20=30.50%, 50=12.52% 00:40:22.316 lat (msec) : 100=6.97%, 250=0.85% 00:40:22.316 cpu : usr=2.48%, sys=5.56%, ctx=255, majf=0, minf=1 00:40:22.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:40:22.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:22.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:22.316 issued rwts: total=3584,3830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:22.316 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:22.316 job2: (groupid=0, jobs=1): err= 0: pid=3647458: Mon Dec 16 06:08:55 2024 00:40:22.316 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:40:22.316 slat (nsec): min=1422, max=19670k, avg=149111.02, stdev=1025368.63 00:40:22.316 clat (msec): min=3, max=103, avg=16.06, stdev=12.53 00:40:22.316 lat (msec): min=3, max=103, avg=16.21, stdev=12.64 00:40:22.316 clat percentiles (msec): 00:40:22.316 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:40:22.316 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:40:22.316 | 70.00th=[ 15], 80.00th=[ 18], 90.00th=[ 22], 95.00th=[ 41], 00:40:22.316 | 99.00th=[ 83], 99.50th=[ 94], 99.90th=[ 104], 99.95th=[ 104], 00:40:22.316 | 99.99th=[ 104] 00:40:22.316 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:40:22.316 slat (usec): min=2, max=10442, avg=123.69, stdev=739.01 00:40:22.316 clat (msec): min=2, max=103, avg=19.46, stdev=16.21 00:40:22.316 lat (msec): min=2, max=103, avg=19.58, stdev=16.29 00:40:22.316 clat percentiles (msec): 00:40:22.316 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:40:22.316 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 17], 00:40:22.316 | 70.00th=[ 20], 80.00th=[ 24], 90.00th=[ 42], 95.00th=[ 57], 00:40:22.316 | 99.00th=[ 84], 99.50th=[ 86], 99.90th=[ 94], 99.95th=[ 104], 00:40:22.316 | 99.99th=[ 104] 00:40:22.316 bw ( KiB/s): min=12288, max=16384, per=21.34%, avg=14336.00, stdev=2896.31, samples=2 00:40:22.316 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:40:22.316 lat (msec) : 4=0.49%, 10=23.45%, 20=56.95%, 50=14.16%, 100=4.75% 00:40:22.316 lat (msec) : 250=0.21% 00:40:22.316 cpu : usr=2.69%, sys=4.79%, ctx=297, majf=0, minf=1 00:40:22.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:40:22.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:22.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:22.316 issued rwts: total=3577,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:22.316 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:22.316 job3: (groupid=0, jobs=1): err= 0: pid=3647463: Mon Dec 16 06:08:55 2024 00:40:22.316 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:40:22.316 slat (nsec): min=1094, max=53564k, avg=104209.02, stdev=1226682.07 00:40:22.316 clat (msec): min=3, max=128, avg=14.35, 
stdev=17.60 00:40:22.316 lat (msec): min=3, max=128, avg=14.46, stdev=17.69 00:40:22.316 clat percentiles (msec): 00:40:22.316 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:40:22.316 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 12], 00:40:22.316 | 70.00th=[ 13], 80.00th=[ 15], 90.00th=[ 18], 95.00th=[ 23], 00:40:22.316 | 99.00th=[ 118], 99.50th=[ 126], 99.90th=[ 129], 99.95th=[ 129], 00:40:22.316 | 99.99th=[ 129] 00:40:22.316 write: IOPS=4897, BW=19.1MiB/s (20.1MB/s)(19.2MiB/1005msec); 0 zone resets 00:40:22.316 slat (nsec): min=1875, max=12508k, avg=96875.36, stdev=624019.86 00:40:22.316 clat (usec): min=3068, max=47503, avg=12451.14, stdev=7027.71 00:40:22.316 lat (usec): min=4724, max=47507, avg=12548.02, stdev=7066.50 00:40:22.316 clat percentiles (usec): 00:40:22.316 | 1.00th=[ 5342], 5.00th=[ 6194], 10.00th=[ 7242], 20.00th=[ 8586], 00:40:22.316 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10421], 60.00th=[11207], 00:40:22.316 | 70.00th=[12125], 80.00th=[13960], 90.00th=[19268], 95.00th=[30802], 00:40:22.316 | 99.00th=[41157], 99.50th=[43254], 99.90th=[47449], 99.95th=[47449], 00:40:22.316 | 99.99th=[47449] 00:40:22.316 bw ( KiB/s): min=17256, max=21096, per=28.55%, avg=19176.00, stdev=2715.29, samples=2 00:40:22.316 iops : min= 4314, max= 5274, avg=4794.00, stdev=678.82, samples=2 00:40:22.316 lat (msec) : 4=0.57%, 10=43.38%, 20=48.63%, 50=5.76%, 100=0.67% 00:40:22.316 lat (msec) : 250=1.00% 00:40:22.316 cpu : usr=4.78%, sys=5.08%, ctx=318, majf=0, minf=1 00:40:22.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:40:22.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:22.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:22.316 issued rwts: total=4608,4922,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:22.316 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:22.316 00:40:22.316 Run status group 0 (all jobs): 00:40:22.316 READ: bw=57.6MiB/s (60.4MB/s), 12.1MiB/s-17.9MiB/s (12.7MB/s-18.8MB/s), io=58.1MiB (60.9MB), run=1004-1009msec 00:40:22.316 WRITE: bw=65.6MiB/s (68.8MB/s), 13.9MiB/s-19.1MiB/s (14.6MB/s-20.1MB/s), io=66.2MiB (69.4MB), run=1004-1009msec 00:40:22.316 00:40:22.316 Disk stats (read/write): 00:40:22.316 nvme0n1: ios=2443/3584, merge=0/0, ticks=22088/72885, in_queue=94973, util=98.10% 00:40:22.316 nvme0n2: ios=3235/3584, merge=0/0, ticks=37179/56961, in_queue=94140, util=99.29% 00:40:22.316 nvme0n3: ios=2608/3071, merge=0/0, ticks=36536/63720, in_queue=100256, util=99.90% 00:40:22.316 nvme0n4: ios=4140/4390, merge=0/0, ticks=36819/33507, in_queue=70326, util=100.00% 00:40:22.316 06:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:40:22.316 06:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3647560 00:40:22.316 06:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:40:22.316 06:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:40:22.316 [global] 00:40:22.316 thread=1 00:40:22.316 invalidate=1 00:40:22.316 rw=read 00:40:22.316 time_based=1 00:40:22.316 runtime=10 00:40:22.316 ioengine=libaio 00:40:22.316 direct=1 00:40:22.316 bs=4096 00:40:22.316 iodepth=1 00:40:22.316 norandommap=1 00:40:22.316 numjobs=1 00:40:22.316 00:40:22.316 [job0] 00:40:22.316 
filename=/dev/nvme0n1 00:40:22.316 [job1] 00:40:22.316 filename=/dev/nvme0n2 00:40:22.316 [job2] 00:40:22.316 filename=/dev/nvme0n3 00:40:22.316 [job3] 00:40:22.316 filename=/dev/nvme0n4 00:40:22.316 Could not set queue depth (nvme0n1) 00:40:22.316 Could not set queue depth (nvme0n2) 00:40:22.316 Could not set queue depth (nvme0n3) 00:40:22.316 Could not set queue depth (nvme0n4) 00:40:22.316 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:22.316 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:22.316 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:22.316 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:22.316 fio-3.35 00:40:22.316 Starting 4 threads 00:40:25.596 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:40:25.596 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:40:25.596 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=2338816, buflen=4096 00:40:25.596 fio: pid=3647871, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:25.596 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:25.596 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:40:25.596 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=290816, buflen=4096 00:40:25.596 fio: pid=3647870, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:25.596 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11587584, buflen=4096 00:40:25.596 fio: pid=3647856, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:25.596 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:25.596 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:40:25.855 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=56066048, buflen=4096 00:40:25.855 fio: pid=3647869, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:25.855 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:25.855 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:40:25.855 00:40:25.855 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3647856: Mon Dec 16 06:08:59 2024 00:40:25.855 read: IOPS=914, BW=3656KiB/s (3744kB/s)(11.1MiB/3095msec) 00:40:25.855 slat (usec): min=6, max=15847, avg=13.69, 
stdev=297.76 00:40:25.855 clat (usec): min=195, max=48641, avg=1070.70, stdev=5829.77 00:40:25.855 lat (usec): min=202, max=64489, avg=1084.39, stdev=5885.01 00:40:25.855 clat percentiles (usec): 00:40:25.855 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 221], 00:40:25.855 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 227], 00:40:25.855 | 70.00th=[ 231], 80.00th=[ 233], 90.00th=[ 239], 95.00th=[ 247], 00:40:25.855 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[47973], 00:40:25.855 | 99.99th=[48497] 00:40:25.855 bw ( KiB/s): min= 94, max=16926, per=18.12%, avg=3763.33, stdev=6772.65, samples=6 00:40:25.855 iops : min= 23, max= 4231, avg=940.67, stdev=1693.02, samples=6 00:40:25.855 lat (usec) : 250=95.65%, 500=2.23% 00:40:25.855 lat (msec) : 10=0.04%, 50=2.05% 00:40:25.855 cpu : usr=0.58%, sys=1.39%, ctx=2832, majf=0, minf=1 00:40:25.855 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:25.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.855 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.855 issued rwts: total=2830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:25.855 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:25.855 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3647869: Mon Dec 16 06:08:59 2024 00:40:25.855 read: IOPS=4141, BW=16.2MiB/s (17.0MB/s)(53.5MiB/3305msec) 00:40:25.855 slat (usec): min=7, max=8826, avg= 9.95, stdev=114.12 00:40:25.855 clat (usec): min=165, max=41215, avg=228.10, stdev=986.03 00:40:25.855 lat (usec): min=179, max=50042, avg=238.00, stdev=1019.29 00:40:25.855 clat percentiles (usec): 00:40:25.855 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 190], 20.00th=[ 194], 00:40:25.855 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 198], 60.00th=[ 202], 00:40:25.855 | 70.00th=[ 204], 80.00th=[ 212], 90.00th=[ 225], 95.00th=[ 231], 00:40:25.855 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 1303], 99.95th=[41157], 00:40:25.855 | 99.99th=[41157] 00:40:25.855 bw ( KiB/s): min=15281, max=19352, per=86.51%, avg=17966.17, stdev=1848.88, samples=6 00:40:25.855 iops : min= 3820, max= 4838, avg=4491.50, stdev=462.29, samples=6 00:40:25.855 lat (usec) : 250=98.39%, 500=1.50%, 750=0.01% 00:40:25.855 lat (msec) : 2=0.04%, 10=0.01%, 50=0.06% 00:40:25.855 cpu : usr=2.39%, sys=6.42%, ctx=13694, majf=0, minf=1 00:40:25.855 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:25.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.855 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.855 issued rwts: total=13689,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:25.855 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:25.855 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3647870: Mon Dec 16 06:08:59 2024 00:40:25.855 read: IOPS=24, BW=98.1KiB/s (100kB/s)(284KiB/2894msec) 00:40:25.855 slat (nsec): min=11752, max=37671, avg=23665.29, stdev=2975.55 00:40:25.855 clat (usec): min=447, max=42045, avg=40431.47, stdev=4816.26 00:40:25.855 lat (usec): min=485, max=42070, avg=40455.13, stdev=4814.58 00:40:25.855 clat percentiles (usec): 00:40:25.855 | 1.00th=[ 449], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:40:25.855 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:25.855 | 70.00th=[41157], 80.00th=[41157], 
90.00th=[41157], 95.00th=[41157], 00:40:25.855 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:25.855 | 99.99th=[42206] 00:40:25.855 bw ( KiB/s): min= 96, max= 104, per=0.48%, avg=99.00, stdev= 4.12, samples=5 00:40:25.855 iops : min= 24, max= 26, avg=24.60, stdev= 0.89, samples=5 00:40:25.855 lat (usec) : 500=1.39% 00:40:25.855 lat (msec) : 50=97.22% 00:40:25.855 cpu : usr=0.14%, sys=0.00%, ctx=72, majf=0, minf=2 00:40:25.855 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:25.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.855 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.855 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:25.855 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:25.855 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3647871: Mon Dec 16 06:08:59 2024 00:40:25.855 read: IOPS=210, BW=842KiB/s (863kB/s)(2284KiB/2711msec) 00:40:25.855 slat (nsec): min=7878, max=43223, avg=10879.06, stdev=4108.46 00:40:25.855 clat (usec): min=224, max=41968, avg=4688.65, stdev=12677.21 00:40:25.855 lat (usec): min=233, max=41992, avg=4699.50, stdev=12680.19 00:40:25.855 clat percentiles (usec): 00:40:25.855 | 1.00th=[ 233], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 243], 00:40:25.855 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 289], 00:40:25.855 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[41157], 95.00th=[41157], 00:40:25.855 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:40:25.855 | 99.99th=[42206] 00:40:25.855 bw ( KiB/s): min= 96, max= 4119, per=4.35%, avg=903.80, stdev=1797.36, samples=5 00:40:25.855 iops : min= 24, max= 1029, avg=225.80, stdev=449.01, samples=5 00:40:25.855 lat (usec) : 250=35.31%, 500=53.50%, 750=0.17% 00:40:25.855 lat (msec) : 50=10.84% 00:40:25.855 cpu : usr=0.07%, sys=0.26%, ctx=573, majf=0, minf=2 00:40:25.855 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:25.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.855 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.855 issued rwts: total=572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:25.855 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:25.855 00:40:25.855 Run status group 0 (all jobs): 00:40:25.855 READ: bw=20.3MiB/s (21.3MB/s), 98.1KiB/s-16.2MiB/s (100kB/s-17.0MB/s), io=67.0MiB (70.3MB), run=2711-3305msec 00:40:25.855 00:40:25.855 Disk stats (read/write): 00:40:25.855 nvme0n1: ios=2828/0, merge=0/0, ticks=2946/0, in_queue=2946, util=93.71% 00:40:25.855 nvme0n2: ios=13729/0, merge=0/0, ticks=3177/0, in_queue=3177, util=98.18% 00:40:25.855 nvme0n3: ios=69/0, merge=0/0, ticks=2791/0, in_queue=2791, util=96.16% 00:40:25.855 nvme0n4: ios=612/0, merge=0/0, ticks=3234/0, in_queue=3234, util=98.72% 00:40:26.114 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:26.114 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:40:26.372 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:26.372 
06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:40:26.630 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:26.630 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:40:26.889 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:26.889 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:40:26.889 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:40:26.889 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3647560 00:40:26.889 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:40:26.889 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:27.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:27.147 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:27.147 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:40:27.147 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:40:27.147 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:27.147 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:40:27.147 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:27.147 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:40:27.147 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:40:27.147 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:40:27.147 nvmf hotplug test: fio failed as expected 00:40:27.147 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # 
trap - SIGINT SIGTERM EXIT 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:27.406 rmmod nvme_tcp 00:40:27.406 rmmod nvme_fabrics 00:40:27.406 rmmod nvme_keyring 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 3645105 ']' 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 3645105 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3645105 ']' 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3645105 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3645105 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3645105' 00:40:27.406 killing process with pid 3645105 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3645105 00:40:27.406 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3645105 00:40:27.664 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:27.664 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:27.664 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:27.664 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:40:27.664 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- nvmf/common.sh@787 -- # iptables-save 00:40:27.664 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:27.664 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:40:27.664 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:27.664 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:27.664 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:27.664 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:27.664 06:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:29.563 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:29.822 00:40:29.822 real 0m24.986s 00:40:29.822 user 1m31.152s 00:40:29.822 sys 0m10.685s 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:29.822 ************************************ 00:40:29.822 END TEST nvmf_fio_target 00:40:29.822 ************************************ 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:29.822 ************************************ 00:40:29.822 START TEST nvmf_bdevio 00:40:29.822 ************************************ 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:29.822 * Looking for test storage... 
00:40:29.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:29.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.822 --rc genhtml_branch_coverage=1 00:40:29.822 --rc genhtml_function_coverage=1 00:40:29.822 --rc genhtml_legend=1 00:40:29.822 --rc geninfo_all_blocks=1 00:40:29.822 --rc geninfo_unexecuted_blocks=1 00:40:29.822 00:40:29.822 ' 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:29.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.822 --rc genhtml_branch_coverage=1 00:40:29.822 --rc genhtml_function_coverage=1 00:40:29.822 --rc genhtml_legend=1 00:40:29.822 --rc geninfo_all_blocks=1 00:40:29.822 --rc geninfo_unexecuted_blocks=1 00:40:29.822 00:40:29.822 ' 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:29.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.822 --rc genhtml_branch_coverage=1 00:40:29.822 --rc genhtml_function_coverage=1 00:40:29.822 --rc genhtml_legend=1 00:40:29.822 --rc geninfo_all_blocks=1 00:40:29.822 --rc geninfo_unexecuted_blocks=1 00:40:29.822 00:40:29.822 ' 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:29.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.822 --rc genhtml_branch_coverage=1 00:40:29.822 --rc genhtml_function_coverage=1 00:40:29.822 --rc genhtml_legend=1 00:40:29.822 --rc geninfo_all_blocks=1 00:40:29.822 --rc geninfo_unexecuted_blocks=1 00:40:29.822 00:40:29.822 ' 00:40:29.822 06:09:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:29.822 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:29.823 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:30.081 06:09:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:40:30.081 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:35.345 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:35.346 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:35.346 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:35.346 Found net devices under 0000:af:00.0: cvl_0_0 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:35.346 
06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:35.346 Found net devices under 0000:af:00.1: cvl_0_1 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:35.346 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:35.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:35.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:40:35.676 00:40:35.676 --- 10.0.0.2 ping statistics --- 00:40:35.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:35.676 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:35.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:35.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:40:35.676 00:40:35.676 --- 10.0.0.1 ping statistics --- 00:40:35.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:35.676 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # 
nvmfpid=3652541 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 3652541 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3652541 ']' 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:35.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:35.676 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:35.676 [2024-12-16 06:09:09.466559] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:35.676 [2024-12-16 06:09:09.467466] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:40:35.676 [2024-12-16 06:09:09.467500] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:35.959 [2024-12-16 06:09:09.528403] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:35.959 [2024-12-16 06:09:09.567747] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:35.959 [2024-12-16 06:09:09.567787] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:35.959 [2024-12-16 06:09:09.567794] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:35.959 [2024-12-16 06:09:09.567799] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:35.959 [2024-12-16 06:09:09.567804] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:35.959 [2024-12-16 06:09:09.567924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:40:35.959 [2024-12-16 06:09:09.568031] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:40:35.959 [2024-12-16 06:09:09.568139] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:40:35.959 [2024-12-16 06:09:09.568141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:40:35.959 [2024-12-16 06:09:09.640211] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:35.959 [2024-12-16 06:09:09.640748] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
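The namespace and interrupt-mode target bring-up traced between nvmf/common.sh@271 and nvmf/common.sh@506 above boils down to roughly the steps below. This is a sketch only: it reuses the interface and namespace names from the trace, shortens absolute paths to repo-relative ones, and abbreviates the iptables comment string.

  # move the target-side port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator side, tagged so teardown can find it
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
  # start nvmf_tgt inside the namespace in interrupt mode with a 4-core mask
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &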
00:40:35.959 [2024-12-16 06:09:09.641464] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:35.959 [2024-12-16 06:09:09.641578] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:35.959 [2024-12-16 06:09:09.641631] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:35.959 [2024-12-16 06:09:09.716657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:35.959 Malloc0 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:35.959 [2024-12-16 06:09:09.780882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:35.959 { 00:40:35.959 "params": { 00:40:35.959 "name": "Nvme$subsystem", 00:40:35.959 "trtype": "$TEST_TRANSPORT", 00:40:35.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:35.959 "adrfam": "ipv4", 00:40:35.959 "trsvcid": "$NVMF_PORT", 00:40:35.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:35.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:35.959 "hdgst": ${hdgst:-false}, 00:40:35.959 "ddgst": ${ddgst:-false} 00:40:35.959 }, 00:40:35.959 "method": "bdev_nvme_attach_controller" 00:40:35.959 } 00:40:35.959 EOF 00:40:35.959 )") 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:40:35.959 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:35.959 "params": { 00:40:35.959 "name": "Nvme1", 00:40:35.959 "trtype": "tcp", 00:40:35.959 "traddr": "10.0.0.2", 00:40:35.959 "adrfam": "ipv4", 00:40:35.959 "trsvcid": "4420", 00:40:35.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:35.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:35.959 "hdgst": false, 00:40:35.959 "ddgst": false 00:40:35.959 }, 00:40:35.959 "method": "bdev_nvme_attach_controller" 00:40:35.959 }' 00:40:36.243 [2024-12-16 06:09:09.829331] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
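Stripped of the rpc_cmd wrapper, the target setup traced above (bdevio.sh lines 18-22) is the usual transport/bdev/subsystem/namespace/listener sequence. The sketch below assumes rpc.py talks to the target's default /var/tmp/spdk.sock, as the waitforlisten trace earlier indicates; the arguments are copied from the trace.

  # create the TCP transport, a 64 MiB malloc bdev, and export it on 10.0.0.2:4420
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio is then launched with the JSON printed above on /dev/fd/62, which attaches to that listener via bdev_nvme_attach_controller and runs the suite against Nvme1n1.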
00:40:36.243 [2024-12-16 06:09:09.829373] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652569 ] 00:40:36.243 [2024-12-16 06:09:09.886950] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:36.243 [2024-12-16 06:09:09.927953] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:36.243 [2024-12-16 06:09:09.928050] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:40:36.243 [2024-12-16 06:09:09.928052] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:36.516 I/O targets: 00:40:36.516 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:40:36.516 00:40:36.516 00:40:36.516 CUnit - A unit testing framework for C - Version 2.1-3 00:40:36.516 http://cunit.sourceforge.net/ 00:40:36.516 00:40:36.516 00:40:36.516 Suite: bdevio tests on: Nvme1n1 00:40:36.516 Test: blockdev write read block ...passed 00:40:36.516 Test: blockdev write zeroes read block ...passed 00:40:36.516 Test: blockdev write zeroes read no split ...passed 00:40:36.516 Test: blockdev write zeroes read split ...passed 00:40:36.516 Test: blockdev write zeroes read split partial ...passed 00:40:36.516 Test: blockdev reset ...[2024-12-16 06:09:10.297679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:36.516 [2024-12-16 06:09:10.297741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x972950 (9): Bad file descriptor 00:40:36.516 [2024-12-16 06:09:10.342708] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:40:36.516 passed 00:40:36.773 Test: blockdev write read 8 blocks ...passed 00:40:36.773 Test: blockdev write read size > 128k ...passed 00:40:36.773 Test: blockdev write read invalid size ...passed 00:40:36.773 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:36.773 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:36.773 Test: blockdev write read max offset ...passed 00:40:36.773 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:36.773 Test: blockdev writev readv 8 blocks ...passed 00:40:36.773 Test: blockdev writev readv 30 x 1block ...passed 00:40:36.773 Test: blockdev writev readv block ...passed 00:40:36.773 Test: blockdev writev readv size > 128k ...passed 00:40:36.773 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:36.773 Test: blockdev comparev and writev ...[2024-12-16 06:09:10.593672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:36.773 [2024-12-16 06:09:10.593706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:36.773 [2024-12-16 06:09:10.593721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:36.773 [2024-12-16 06:09:10.593729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:36.773 [2024-12-16 06:09:10.594024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:36.773 [2024-12-16 06:09:10.594035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:36.773 [2024-12-16 06:09:10.594047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:36.774 [2024-12-16 06:09:10.594055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:36.774 [2024-12-16 06:09:10.594362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:36.774 [2024-12-16 06:09:10.594373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:36.774 [2024-12-16 06:09:10.594385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:36.774 [2024-12-16 06:09:10.594391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:36.774 [2024-12-16 06:09:10.594682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:36.774 [2024-12-16 06:09:10.594695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:36.774 [2024-12-16 06:09:10.594707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:36.774 [2024-12-16 06:09:10.594715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:37.031 passed 00:40:37.031 Test: blockdev nvme passthru rw ...passed 00:40:37.031 Test: blockdev nvme passthru vendor specific ...[2024-12-16 06:09:10.677223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:37.031 [2024-12-16 06:09:10.677246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:37.031 [2024-12-16 06:09:10.677366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:37.031 [2024-12-16 06:09:10.677377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:37.031 [2024-12-16 06:09:10.677492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:37.031 [2024-12-16 06:09:10.677502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:37.031 [2024-12-16 06:09:10.677611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:37.031 [2024-12-16 06:09:10.677622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:37.031 passed 00:40:37.031 Test: blockdev nvme admin passthru ...passed 00:40:37.031 Test: blockdev copy ...passed 00:40:37.031 00:40:37.031 Run Summary: Type Total Ran Passed Failed Inactive 00:40:37.031 suites 1 1 n/a 0 0 00:40:37.031 tests 23 23 23 0 0 00:40:37.031 asserts 152 152 152 0 n/a 00:40:37.031 00:40:37.031 Elapsed time = 1.075 seconds 00:40:37.031 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:37.031 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.031 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:37.289 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.289 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:40:37.289 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:40:37.289 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:37.289 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:40:37.289 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:37.289 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:40:37.289 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:37.289 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:37.289 rmmod nvme_tcp 00:40:37.289 rmmod nvme_fabrics 00:40:37.289 rmmod nvme_keyring 00:40:37.289 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:40:37.289 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:40:37.289 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:40:37.289 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 3652541 ']' 00:40:37.289 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 3652541 00:40:37.289 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3652541 ']' 00:40:37.289 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3652541 00:40:37.289 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:40:37.289 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:37.289 06:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3652541 00:40:37.289 06:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:40:37.289 06:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:40:37.289 06:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3652541' 00:40:37.289 killing process with pid 3652541 00:40:37.289 06:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3652541 00:40:37.289 06:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3652541 00:40:37.547 06:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:37.547 06:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:37.547 06:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:37.547 06:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:40:37.547 06:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:40:37.547 06:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:37.547 06:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:40:37.547 06:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:37.547 06:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:37.547 06:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:37.547 06:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:37.547 06:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:39.446 06:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:39.446 00:40:39.446 real 0m9.790s 00:40:39.446 user 
0m8.862s 00:40:39.447 sys 0m5.117s 00:40:39.447 06:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:39.447 06:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:39.447 ************************************ 00:40:39.447 END TEST nvmf_bdevio 00:40:39.447 ************************************ 00:40:39.704 06:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:40:39.704 00:40:39.704 real 4m24.900s 00:40:39.704 user 9m3.936s 00:40:39.704 sys 1m47.757s 00:40:39.704 06:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:39.704 06:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:39.704 ************************************ 00:40:39.704 END TEST nvmf_target_core_interrupt_mode 00:40:39.704 ************************************ 00:40:39.704 06:09:13 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:39.704 06:09:13 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:39.704 06:09:13 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:39.704 06:09:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:39.704 ************************************ 00:40:39.704 START TEST nvmf_interrupt 00:40:39.704 ************************************ 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:39.704 * Looking for test storage... 
00:40:39.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:40:39.704 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:39.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:39.705 --rc genhtml_branch_coverage=1 00:40:39.705 --rc genhtml_function_coverage=1 00:40:39.705 --rc genhtml_legend=1 00:40:39.705 --rc geninfo_all_blocks=1 00:40:39.705 --rc geninfo_unexecuted_blocks=1 00:40:39.705 00:40:39.705 ' 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:39.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:39.705 --rc genhtml_branch_coverage=1 00:40:39.705 --rc genhtml_function_coverage=1 00:40:39.705 --rc genhtml_legend=1 00:40:39.705 --rc geninfo_all_blocks=1 00:40:39.705 --rc geninfo_unexecuted_blocks=1 00:40:39.705 00:40:39.705 ' 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:39.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:39.705 --rc genhtml_branch_coverage=1 00:40:39.705 --rc genhtml_function_coverage=1 00:40:39.705 --rc genhtml_legend=1 00:40:39.705 --rc geninfo_all_blocks=1 00:40:39.705 --rc geninfo_unexecuted_blocks=1 00:40:39.705 00:40:39.705 ' 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:39.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:39.705 --rc genhtml_branch_coverage=1 00:40:39.705 --rc genhtml_function_coverage=1 00:40:39.705 --rc genhtml_legend=1 00:40:39.705 --rc geninfo_all_blocks=1 00:40:39.705 --rc geninfo_unexecuted_blocks=1 00:40:39.705 00:40:39.705 ' 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:39.705 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:40:39.963 06:09:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:45.225 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:45.225 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:40:45.225 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:45.226 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:45.226 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:45.226 Found net devices under 0000:af:00.0: cvl_0_0 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:45.226 Found net devices under 0000:af:00.1: cvl_0_1 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # is_hw=yes 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:45.226 06:09:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:45.226 06:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:45.226 06:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:45.226 06:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:45.226 06:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:45.226 06:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:45.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:45.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:40:45.226 00:40:45.226 --- 10.0.0.2 ping statistics --- 00:40:45.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:45.226 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:40:45.226 06:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:45.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:45.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:40:45.226 00:40:45.226 --- 10.0.0.1 ping statistics --- 00:40:45.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:45.226 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:40:45.226 06:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:45.226 06:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # return 0 00:40:45.226 06:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:45.226 06:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:45.226 06:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:45.226 06:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:45.226 06:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:45.226 06:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:45.226 06:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:45.485 06:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:40:45.485 06:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:45.485 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:45.485 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:45.485 06:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # nvmfpid=3656243 00:40:45.485 06:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # waitforlisten 3656243 00:40:45.485 06:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:45.485 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 3656243 ']' 00:40:45.485 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:45.485 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:45.485 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:45.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:45.485 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:45.485 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:45.485 [2024-12-16 06:09:19.159490] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:45.485 [2024-12-16 06:09:19.160426] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:40:45.485 [2024-12-16 06:09:19.160459] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:45.485 [2024-12-16 06:09:19.219172] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:45.485 [2024-12-16 06:09:19.258541] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:40:45.485 [2024-12-16 06:09:19.258578] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:45.485 [2024-12-16 06:09:19.258585] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:45.485 [2024-12-16 06:09:19.258591] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:45.485 [2024-12-16 06:09:19.258596] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:45.485 [2024-12-16 06:09:19.258639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:45.485 [2024-12-16 06:09:19.258641] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:45.485 [2024-12-16 06:09:19.320115] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:45.485 [2024-12-16 06:09:19.320494] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:45.485 [2024-12-16 06:09:19.320509] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:40:45.745 5000+0 records in 00:40:45.745 5000+0 records out 00:40:45.745 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0174769 s, 586 MB/s 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:45.745 AIO0 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:45.745 [2024-12-16 06:09:19.447381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.745 06:09:19 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:45.745 [2024-12-16 06:09:19.495681] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3656243 0 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3656243 0 idle 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3656243 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3656243 -w 256 00:40:45.745 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:46.004 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3656243 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.22 reactor_0' 00:40:46.004 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3656243 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.22 reactor_0 00:40:46.004 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:46.004 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:46.004 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:46.004 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:40:46.004 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:46.004 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:46.004 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:46.004 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:46.004 06:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:46.005 06:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3656243 1 00:40:46.005 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3656243 1 idle 00:40:46.005 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3656243 00:40:46.005 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:46.005 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:46.005 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:46.005 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:46.005 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:46.005 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:46.005 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:46.005 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:46.005 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:46.005 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3656243 -w 256 00:40:46.005 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3656272 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3656272 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3656314 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
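At this point the trace has created the AIO backing file with dd, registered it as bdev AIO0, exposed it through an NVMe-oF/TCP subsystem on 10.0.0.2:4420, and launched spdk_nvme_perf against it while the interrupt-mode target is expected to go busy. A minimal standalone sketch of that sequence, using the same RPC verbs seen in the trace (the backing-file path and RPC socket are placeholder assumptions, and the ip-netns wrapping used by the harness is omitted):

    # assumes an interrupt-mode nvmf_tgt is already running and serving /var/tmp/spdk.sock
    RPC=./scripts/rpc.py
    dd if=/dev/zero of=/tmp/aiofile bs=2048 count=5000            # ~10 MB backing file
    $RPC bdev_aio_create /tmp/aiofile AIO0 2048                   # AIO bdev with 2048-byte blocks
    $RPC nvmf_create_transport -t tcp -o -u 8192 -q 256           # TCP transport, queue depth 256
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # mixed 4K random read/write load, queue depth 256, 10 s, initiator pinned to cores 2-3 (0xC)
    ./build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The harness issues these through its rpc_cmd wrapper inside the cvl_0_0_ns_spdk network namespace; the sketch drops that detail to keep the flow readable.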
00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3656243 0 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3656243 0 busy 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3656243 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:46.263 06:09:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3656243 -w 256 00:40:46.263 06:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3656243 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:00.22 reactor_0' 00:40:46.263 06:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:46.263 06:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3656243 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:00.22 reactor_0 00:40:46.263 06:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:46.263 06:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:46.263 06:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:46.263 06:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:46.263 06:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:46.263 06:09:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3656243 -w 256 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3656243 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:02.55 reactor_0' 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3656243 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:02.55 reactor_0 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3656243 1 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3656243 1 busy 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3656243 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3656243 -w 256 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3656272 root 20 0 128.2g 46848 33792 R 93.8 0.1 0:01.35 reactor_1' 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3656272 root 20 0 128.2g 46848 33792 R 93.8 0.1 0:01.35 reactor_1 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:47.638 06:09:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3656314 00:40:57.609 Initializing NVMe Controllers 00:40:57.609 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:57.609 Controller IO queue size 256, less than required. 00:40:57.609 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:57.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:57.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:57.609 Initialization complete. Launching workers. 
00:40:57.609 ======================================================== 00:40:57.609 Latency(us) 00:40:57.609 Device Information : IOPS MiB/s Average min max 00:40:57.609 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16982.59 66.34 15082.93 2799.33 19173.19 00:40:57.609 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16566.90 64.71 15460.25 4882.04 18710.62 00:40:57.609 ======================================================== 00:40:57.609 Total : 33549.49 131.05 15269.25 2799.33 19173.19 00:40:57.609 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3656243 0 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3656243 0 idle 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3656243 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3656243 -w 256 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3656243 root 20 0 128.2g 46848 33792 S 6.7 0.1 0:20.21 reactor_0' 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3656243 root 20 0 128.2g 46848 33792 S 6.7 0.1 0:20.21 reactor_0 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3656243 1 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3656243 1 idle 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3656243 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3656243 -w 256 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3656272 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1' 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3656272 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:57.609 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:57.610 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:57.610 06:09:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:57.610 06:09:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:57.610 06:09:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:40:57.610 06:09:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:40:57.610 06:09:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:40:57.610 06:09:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:40:57.610 06:09:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:40:58.985 06:09:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:40:58.985 06:09:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:40:58.985 06:09:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:40:58.985 06:09:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:40:58.985 06:09:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:40:58.985 06:09:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:40:58.985 06:09:32 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:40:58.985 06:09:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3656243 0 00:40:58.985 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3656243 0 idle 00:40:58.985 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3656243 00:40:58.985 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:58.985 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:58.985 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:58.985 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:58.985 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:58.985 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:58.985 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:58.985 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:58.985 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:58.985 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:58.985 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3656243 -w 256 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3656243 root 20 0 128.2g 72192 33792 S 0.0 0.1 0:20.34 reactor_0' 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3656243 root 20 0 128.2g 72192 33792 S 0.0 0.1 0:20.34 reactor_0 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3656243 1 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3656243 1 idle 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3656243 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
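Each of these idle/busy probes follows the same pattern that interrupt/common.sh traces out: take a single batch snapshot of the target's threads with top, keep the row for the reactor in question, read the %CPU column, and compare it against the idle (30%) or busy threshold. A rough standalone sketch of the idle side of that check, assuming the same top column layout as above (field 9 is %CPU) and reusing the pid and reactor thread names from this run:

    reactor_is_idle() {
        local pid=$1 idx=$2 idle_threshold=30
        local row cpu
        # one batch snapshot of the target's threads; keep the reactor of interest
        row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}") || return 1
        cpu=$(echo "$row" | sed -e 's/^\s*//g' | awk '{print $9}')
        cpu=${cpu%%.*}                      # truncate "6.7" -> "6" for integer comparison
        (( ${cpu:-0} <= idle_threshold ))   # succeed only when %CPU is at or below the threshold
    }
    # usage, with the pid from this run: reactor_is_idle 3656243 0 && echo reactor_0 idle

The real helper retries the snapshot several times before giving up; the sketch keeps only the single-shot comparison.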
00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3656243 -w 256 00:40:59.243 06:09:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:59.501 06:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3656272 root 20 0 128.2g 72192 33792 S 0.0 0.1 0:10.05 reactor_1' 00:40:59.501 06:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3656272 root 20 0 128.2g 72192 33792 S 0.0 0.1 0:10.05 reactor_1 00:40:59.501 06:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:59.501 06:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:59.501 06:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:59.501 06:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:59.501 06:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:59.501 06:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:59.501 06:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:59.501 06:09:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:59.501 06:09:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:59.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:59.501 06:09:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:59.501 06:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:40:59.501 06:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:40:59.501 06:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:59.501 06:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:40:59.501 06:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:59.502 06:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:40:59.502 06:09:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:40:59.502 06:09:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:40:59.502 06:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:59.502 06:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:40:59.502 06:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:59.502 06:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:40:59.502 06:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:59.502 06:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:59.502 rmmod nvme_tcp 00:40:59.502 rmmod nvme_fabrics 00:40:59.502 rmmod nvme_keyring 00:40:59.502 06:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:59.502 06:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:40:59.502 06:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:40:59.502 06:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@513 -- # '[' -n 
3656243 ']' 00:40:59.502 06:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # killprocess 3656243 00:40:59.502 06:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 3656243 ']' 00:40:59.502 06:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 3656243 00:40:59.502 06:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:40:59.502 06:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:59.502 06:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3656243 00:40:59.759 06:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:59.759 06:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:59.759 06:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3656243' 00:40:59.759 killing process with pid 3656243 00:40:59.759 06:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 3656243 00:40:59.759 06:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 3656243 00:40:59.759 06:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:59.759 06:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:59.759 06:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:59.759 06:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:40:59.759 06:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-save 00:40:59.759 06:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:59.759 06:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-restore 00:41:00.018 06:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:00.018 06:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:00.018 06:09:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:00.018 06:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:00.018 06:09:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:01.934 06:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:01.934 00:41:01.934 real 0m22.296s 00:41:01.934 user 0m39.337s 00:41:01.934 sys 0m8.251s 00:41:01.934 06:09:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:01.934 06:09:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:01.934 ************************************ 00:41:01.934 END TEST nvmf_interrupt 00:41:01.934 ************************************ 00:41:01.934 00:41:01.934 real 34m43.619s 00:41:01.934 user 85m50.914s 00:41:01.934 sys 9m55.548s 00:41:01.934 06:09:35 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:01.934 06:09:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:01.934 ************************************ 00:41:01.934 END TEST nvmf_tcp 00:41:01.934 ************************************ 00:41:01.934 06:09:35 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:41:01.934 06:09:35 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:01.934 06:09:35 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:41:01.934 06:09:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:01.934 06:09:35 -- common/autotest_common.sh@10 -- # set +x 00:41:02.193 ************************************ 00:41:02.193 START TEST spdkcli_nvmf_tcp 00:41:02.193 ************************************ 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:02.193 * Looking for test storage... 00:41:02.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:02.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:02.193 --rc genhtml_branch_coverage=1 00:41:02.193 --rc genhtml_function_coverage=1 00:41:02.193 --rc genhtml_legend=1 00:41:02.193 --rc geninfo_all_blocks=1 00:41:02.193 --rc geninfo_unexecuted_blocks=1 00:41:02.193 00:41:02.193 ' 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:02.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:02.193 --rc genhtml_branch_coverage=1 00:41:02.193 --rc genhtml_function_coverage=1 00:41:02.193 --rc genhtml_legend=1 00:41:02.193 --rc geninfo_all_blocks=1 00:41:02.193 --rc geninfo_unexecuted_blocks=1 00:41:02.193 00:41:02.193 ' 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:02.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:02.193 --rc genhtml_branch_coverage=1 00:41:02.193 --rc genhtml_function_coverage=1 00:41:02.193 --rc genhtml_legend=1 00:41:02.193 --rc geninfo_all_blocks=1 00:41:02.193 --rc geninfo_unexecuted_blocks=1 00:41:02.193 00:41:02.193 ' 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:02.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:02.193 --rc genhtml_branch_coverage=1 00:41:02.193 --rc genhtml_function_coverage=1 00:41:02.193 --rc genhtml_legend=1 00:41:02.193 --rc geninfo_all_blocks=1 00:41:02.193 --rc geninfo_unexecuted_blocks=1 00:41:02.193 00:41:02.193 ' 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:41:02.193 
06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:41:02.193 06:09:35 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:02.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:02.193 06:09:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:02.193 06:09:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:41:02.193 06:09:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3658944 00:41:02.193 06:09:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3658944 00:41:02.193 06:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 3658944 ']' 00:41:02.193 06:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:02.193 06:09:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:41:02.193 06:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:02.193 06:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:02.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:02.193 06:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:02.193 06:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:02.451 [2024-12-16 06:09:36.048417] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:41:02.451 [2024-12-16 06:09:36.048464] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658944 ] 00:41:02.451 [2024-12-16 06:09:36.103119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:02.451 [2024-12-16 06:09:36.145698] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:02.451 [2024-12-16 06:09:36.145703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:02.451 06:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:02.451 06:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:41:02.451 06:09:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:41:02.451 06:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:02.451 06:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:02.451 06:09:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:41:02.451 06:09:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:41:02.451 06:09:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:41:02.451 06:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:02.451 06:09:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:02.451 06:09:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:41:02.451 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:41:02.451 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:41:02.451 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:41:02.451 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:41:02.451 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:41:02.451 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:41:02.451 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:02.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:41:02.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:41:02.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:02.451 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:02.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:41:02.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:02.451 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:02.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:41:02.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:41:02.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:02.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:02.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:02.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:41:02.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:41:02.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:02.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:41:02.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:02.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:41:02.451 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:41:02.451 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:41:02.451 ' 00:41:05.728 [2024-12-16 06:09:38.938066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:06.660 [2024-12-16 06:09:40.274529] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:41:09.183 [2024-12-16 06:09:42.750130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:41:11.079 [2024-12-16 06:09:44.924915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:41:12.974 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:41:12.974 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:41:12.974 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:41:12.974 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:41:12.974 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:41:12.974 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:41:12.974 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:41:12.974 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:12.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:41:12.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:41:12.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:12.974 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:12.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:41:12.974 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:12.974 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:12.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:41:12.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:12.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:12.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:12.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:12.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:41:12.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:41:12.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:12.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:41:12.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:12.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:41:12.974 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:41:12.974 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:41:12.974 06:09:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:41:12.974 06:09:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:12.974 06:09:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:12.974 06:09:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:41:12.974 06:09:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:12.974 06:09:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:12.974 06:09:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:41:12.974 06:09:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:41:13.537 06:09:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:41:13.537 06:09:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:41:13.537 06:09:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:41:13.537 06:09:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:13.537 06:09:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:13.537 
06:09:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:41:13.537 06:09:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:13.537 06:09:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:13.537 06:09:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:41:13.537 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:41:13.537 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:13.537 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:41:13.537 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:41:13.537 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:41:13.537 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:41:13.537 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:13.537 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:41:13.537 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:41:13.537 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:41:13.537 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:41:13.537 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:41:13.537 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:41:13.537 ' 00:41:18.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:41:18.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:41:18.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:18.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:41:18.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:41:18.793 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:41:18.793 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:41:18.793 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:18.793 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:41:18.793 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:41:18.793 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:41:18.793 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:41:18.793 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:41:18.793 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:41:18.793 06:09:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:41:18.793 06:09:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:18.793 06:09:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:18.793 
06:09:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3658944 00:41:18.793 06:09:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3658944 ']' 00:41:18.793 06:09:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3658944 00:41:18.793 06:09:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:41:18.793 06:09:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:18.793 06:09:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3658944 00:41:18.793 06:09:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:18.793 06:09:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:18.794 06:09:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3658944' 00:41:18.794 killing process with pid 3658944 00:41:18.794 06:09:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 3658944 00:41:18.794 06:09:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 3658944 00:41:19.061 06:09:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:41:19.061 06:09:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:41:19.061 06:09:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3658944 ']' 00:41:19.061 06:09:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3658944 00:41:19.061 06:09:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3658944 ']' 00:41:19.061 06:09:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3658944 00:41:19.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3658944) - No such process 00:41:19.061 06:09:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 3658944 is not found' 00:41:19.062 Process with pid 3658944 is not found 00:41:19.062 06:09:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:41:19.062 06:09:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:41:19.062 06:09:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:41:19.062 00:41:19.062 real 0m16.942s 00:41:19.062 user 0m36.926s 00:41:19.062 sys 0m0.753s 00:41:19.062 06:09:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:19.062 06:09:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:19.062 ************************************ 00:41:19.062 END TEST spdkcli_nvmf_tcp 00:41:19.062 ************************************ 00:41:19.062 06:09:52 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:19.062 06:09:52 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:41:19.062 06:09:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:19.062 06:09:52 -- common/autotest_common.sh@10 -- # set +x 00:41:19.062 ************************************ 00:41:19.062 START TEST nvmf_identify_passthru 00:41:19.062 ************************************ 00:41:19.062 06:09:52 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:19.062 * Looking for test 
storage... 00:41:19.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:19.062 06:09:52 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:19.062 06:09:52 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:41:19.062 06:09:52 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:19.324 06:09:52 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:41:19.324 06:09:52 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:19.324 06:09:52 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:19.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.324 --rc genhtml_branch_coverage=1 00:41:19.324 --rc genhtml_function_coverage=1 00:41:19.324 --rc genhtml_legend=1 00:41:19.324 --rc geninfo_all_blocks=1 00:41:19.324 --rc geninfo_unexecuted_blocks=1 00:41:19.324 00:41:19.324 ' 00:41:19.324 06:09:52 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:19.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.324 --rc genhtml_branch_coverage=1 00:41:19.324 --rc genhtml_function_coverage=1 00:41:19.324 --rc genhtml_legend=1 00:41:19.324 --rc geninfo_all_blocks=1 00:41:19.324 --rc geninfo_unexecuted_blocks=1 00:41:19.324 00:41:19.324 ' 00:41:19.324 06:09:52 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:19.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.324 --rc genhtml_branch_coverage=1 00:41:19.324 --rc genhtml_function_coverage=1 00:41:19.324 --rc genhtml_legend=1 00:41:19.324 --rc geninfo_all_blocks=1 00:41:19.324 --rc geninfo_unexecuted_blocks=1 00:41:19.324 00:41:19.324 ' 00:41:19.324 06:09:52 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:19.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.324 --rc genhtml_branch_coverage=1 00:41:19.324 --rc genhtml_function_coverage=1 00:41:19.324 --rc genhtml_legend=1 00:41:19.324 --rc geninfo_all_blocks=1 00:41:19.324 --rc geninfo_unexecuted_blocks=1 00:41:19.324 00:41:19.324 ' 00:41:19.324 06:09:52 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:19.324 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:41:19.324 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:19.324 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:19.324 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:19.324 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:41:19.324 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:19.324 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:19.324 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:19.324 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:19.324 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:19.324 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:19.324 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:19.324 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:19.324 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:19.324 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:19.324 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:19.324 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:19.324 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:19.324 06:09:52 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:19.325 06:09:52 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.325 06:09:52 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.325 06:09:52 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.325 06:09:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:19.325 06:09:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.325 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:41:19.325 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:19.325 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:19.325 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:19.325 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:19.325 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:19.325 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:19.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:19.325 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:19.325 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:19.325 06:09:52 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:19.325 06:09:53 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:19.325 06:09:53 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:19.325 06:09:53 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:19.325 06:09:53 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:19.325 06:09:53 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:19.325 06:09:53 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.325 06:09:53 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.325 06:09:53 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.325 06:09:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:19.325 06:09:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.325 06:09:53 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:41:19.325 06:09:53 nvmf_identify_passthru -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:41:19.325 06:09:53 nvmf_identify_passthru -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:19.325 06:09:53 nvmf_identify_passthru -- nvmf/common.sh@472 -- # prepare_net_devs 00:41:19.325 06:09:53 nvmf_identify_passthru -- nvmf/common.sh@434 -- # local -g is_hw=no 00:41:19.325 06:09:53 nvmf_identify_passthru -- nvmf/common.sh@436 -- # remove_spdk_ns 00:41:19.325 06:09:53 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:19.325 06:09:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:19.325 06:09:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:19.325 06:09:53 nvmf_identify_passthru -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:41:19.325 06:09:53 nvmf_identify_passthru -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:41:19.325 06:09:53 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:41:19.325 06:09:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:41:24.586 06:09:58 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:24.586 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:24.586 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:41:24.586 
06:09:58 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:24.586 Found net devices under 0000:af:00.0: cvl_0_0 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:24.586 Found net devices under 0000:af:00.1: cvl_0_1 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@438 -- # is_hw=yes 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:24.586 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:24.587 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:24.587 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:24.587 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:24.845 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:24.845 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:24.845 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:24.845 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:24.845 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:24.845 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:24.845 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:24.845 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:24.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:24.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:41:24.845 00:41:24.845 --- 10.0.0.2 ping statistics --- 00:41:24.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:24.845 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:41:24.845 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:24.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:24.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:41:24.845 00:41:24.845 --- 10.0.0.1 ping statistics --- 00:41:24.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:24.845 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:41:24.845 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:24.845 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@446 -- # return 0 00:41:24.845 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:41:24.845 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:24.845 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:41:24.845 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:41:24.845 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:24.845 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:41:24.845 06:09:58 nvmf_identify_passthru -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:41:24.845 06:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:41:24.845 06:09:58 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:24.845 06:09:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:24.845 06:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:41:24.845 06:09:58 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:41:24.845 06:09:58 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:41:24.845 06:09:58 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:41:24.845 06:09:58 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:41:24.845 06:09:58 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:41:24.845 06:09:58 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:41:24.845 06:09:58 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:41:24.845 06:09:58 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:41:24.845 06:09:58 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:41:24.845 06:09:58 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:41:24.845 06:09:58 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:41:24.845 06:09:58 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:5e:00.0 00:41:24.845 06:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:41:24.845 06:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:41:24.846 06:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:41:24.846 06:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:41:24.846 06:09:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:41:29.030 06:10:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ7244049A1P0FGN 00:41:29.031 06:10:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:41:29.031 06:10:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:41:29.031 06:10:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:41:33.214 06:10:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:41:33.214 06:10:06 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:41:33.214 06:10:06 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:33.214 06:10:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:33.214 06:10:07 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:41:33.214 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:33.214 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:33.214 06:10:07 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3666025 00:41:33.214 06:10:07 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:41:33.214 06:10:07 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:33.214 06:10:07 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3666025 00:41:33.214 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 3666025 ']' 00:41:33.214 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:33.214 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:33.214 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:33.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:33.214 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:33.214 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:33.214 [2024-12-16 06:10:07.054757] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:41:33.214 [2024-12-16 06:10:07.054801] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:33.473 [2024-12-16 06:10:07.117689] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:33.473 [2024-12-16 06:10:07.159039] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:33.473 [2024-12-16 06:10:07.159076] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:41:33.473 [2024-12-16 06:10:07.159084] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:33.473 [2024-12-16 06:10:07.159090] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:33.473 [2024-12-16 06:10:07.159095] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:33.473 [2024-12-16 06:10:07.159143] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:33.473 [2024-12-16 06:10:07.159246] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:41:33.473 [2024-12-16 06:10:07.159265] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:41:33.473 [2024-12-16 06:10:07.159269] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:33.473 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:33.473 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:41:33.473 06:10:07 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:41:33.473 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:33.473 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:33.473 INFO: Log level set to 20 00:41:33.473 INFO: Requests: 00:41:33.473 { 00:41:33.473 "jsonrpc": "2.0", 00:41:33.473 "method": "nvmf_set_config", 00:41:33.473 "id": 1, 00:41:33.473 "params": { 00:41:33.473 "admin_cmd_passthru": { 00:41:33.473 "identify_ctrlr": true 00:41:33.473 } 00:41:33.473 } 00:41:33.473 } 00:41:33.473 00:41:33.473 INFO: response: 00:41:33.473 { 00:41:33.473 "jsonrpc": "2.0", 00:41:33.473 "id": 1, 00:41:33.473 "result": true 00:41:33.473 } 00:41:33.473 00:41:33.473 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:33.473 06:10:07 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:41:33.473 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:33.473 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:33.473 INFO: Setting log level to 20 00:41:33.473 INFO: Setting log level to 20 00:41:33.473 INFO: Log level set to 20 00:41:33.473 INFO: Log level set to 20 00:41:33.473 INFO: Requests: 00:41:33.473 { 00:41:33.473 "jsonrpc": "2.0", 00:41:33.473 "method": "framework_start_init", 00:41:33.473 "id": 1 00:41:33.473 } 00:41:33.473 00:41:33.473 INFO: Requests: 00:41:33.473 { 00:41:33.473 "jsonrpc": "2.0", 00:41:33.473 "method": "framework_start_init", 00:41:33.473 "id": 1 00:41:33.473 } 00:41:33.473 00:41:33.473 [2024-12-16 06:10:07.297063] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:41:33.473 INFO: response: 00:41:33.473 { 00:41:33.473 "jsonrpc": "2.0", 00:41:33.473 "id": 1, 00:41:33.473 "result": true 00:41:33.473 } 00:41:33.473 00:41:33.473 INFO: response: 00:41:33.473 { 00:41:33.473 "jsonrpc": "2.0", 00:41:33.473 "id": 1, 00:41:33.473 "result": true 00:41:33.473 } 00:41:33.473 00:41:33.473 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:33.473 06:10:07 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:33.473 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:33.473 06:10:07 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:41:33.473 INFO: Setting log level to 40 00:41:33.473 INFO: Setting log level to 40 00:41:33.473 INFO: Setting log level to 40 00:41:33.473 [2024-12-16 06:10:07.310512] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:33.473 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:33.473 06:10:07 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:41:33.473 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:33.473 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:33.732 06:10:07 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:41:33.732 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:33.732 06:10:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:37.021 Nvme0n1 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.021 06:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.021 06:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.021 06:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:37.021 [2024-12-16 06:10:10.209648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.021 06:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:37.021 [ 00:41:37.021 { 00:41:37.021 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:41:37.021 "subtype": "Discovery", 00:41:37.021 "listen_addresses": [], 00:41:37.021 "allow_any_host": true, 00:41:37.021 "hosts": [] 00:41:37.021 }, 00:41:37.021 { 00:41:37.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:41:37.021 "subtype": "NVMe", 00:41:37.021 "listen_addresses": [ 00:41:37.021 { 00:41:37.021 "trtype": "TCP", 00:41:37.021 "adrfam": "IPv4", 00:41:37.021 "traddr": "10.0.0.2", 00:41:37.021 "trsvcid": "4420" 00:41:37.021 } 00:41:37.021 ], 00:41:37.021 "allow_any_host": true, 00:41:37.021 "hosts": [], 00:41:37.021 "serial_number": 
"SPDK00000000000001", 00:41:37.021 "model_number": "SPDK bdev Controller", 00:41:37.021 "max_namespaces": 1, 00:41:37.021 "min_cntlid": 1, 00:41:37.021 "max_cntlid": 65519, 00:41:37.021 "namespaces": [ 00:41:37.021 { 00:41:37.021 "nsid": 1, 00:41:37.021 "bdev_name": "Nvme0n1", 00:41:37.021 "name": "Nvme0n1", 00:41:37.021 "nguid": "4307B8086A0A4B4380D03AA685F127A1", 00:41:37.021 "uuid": "4307b808-6a0a-4b43-80d0-3aa685f127a1" 00:41:37.021 } 00:41:37.021 ] 00:41:37.021 } 00:41:37.021 ] 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.021 06:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:37.021 06:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:41:37.021 06:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:41:37.021 06:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:41:37.021 06:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:37.021 06:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:41:37.021 06:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:41:37.021 06:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:41:37.021 06:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:41:37.021 06:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:41:37.021 06:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.021 06:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:41:37.021 06:10:10 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:41:37.021 06:10:10 nvmf_identify_passthru -- nvmf/common.sh@512 -- # nvmfcleanup 00:41:37.021 06:10:10 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:41:37.021 06:10:10 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:37.021 06:10:10 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:41:37.021 06:10:10 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:37.021 06:10:10 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:37.021 rmmod nvme_tcp 00:41:37.021 rmmod nvme_fabrics 00:41:37.021 rmmod nvme_keyring 00:41:37.021 06:10:10 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:37.021 06:10:10 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:41:37.021 06:10:10 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:41:37.021 06:10:10 nvmf_identify_passthru -- nvmf/common.sh@513 -- # 
'[' -n 3666025 ']' 00:41:37.021 06:10:10 nvmf_identify_passthru -- nvmf/common.sh@514 -- # killprocess 3666025 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 3666025 ']' 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 3666025 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3666025 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3666025' 00:41:37.021 killing process with pid 3666025 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 3666025 00:41:37.021 06:10:10 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 3666025 00:41:38.402 06:10:12 nvmf_identify_passthru -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:41:38.402 06:10:12 nvmf_identify_passthru -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:41:38.402 06:10:12 nvmf_identify_passthru -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:41:38.402 06:10:12 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:41:38.402 06:10:12 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-save 00:41:38.402 06:10:12 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-restore 00:41:38.402 06:10:12 nvmf_identify_passthru -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:41:38.402 06:10:12 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:38.402 06:10:12 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:38.402 06:10:12 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:38.402 06:10:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:38.402 06:10:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:40.960 06:10:14 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:40.960 00:41:40.960 real 0m21.508s 00:41:40.960 user 0m27.694s 00:41:40.960 sys 0m5.078s 00:41:40.960 06:10:14 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:40.960 06:10:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:40.960 ************************************ 00:41:40.960 END TEST nvmf_identify_passthru 00:41:40.960 ************************************ 00:41:40.960 06:10:14 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:40.960 06:10:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:40.960 06:10:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:40.960 06:10:14 -- common/autotest_common.sh@10 -- # set +x 00:41:40.960 ************************************ 00:41:40.960 START TEST nvmf_dif 00:41:40.960 ************************************ 00:41:40.960 06:10:14 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:40.960 * Looking for test 
storage... 00:41:40.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:40.961 06:10:14 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:40.961 06:10:14 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:41:40.961 06:10:14 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:40.961 06:10:14 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:41:40.961 06:10:14 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:40.961 06:10:14 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:40.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:40.961 --rc genhtml_branch_coverage=1 00:41:40.961 --rc genhtml_function_coverage=1 00:41:40.961 --rc genhtml_legend=1 00:41:40.961 --rc geninfo_all_blocks=1 00:41:40.961 --rc geninfo_unexecuted_blocks=1 00:41:40.961 00:41:40.961 ' 00:41:40.961 06:10:14 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:40.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:40.961 --rc genhtml_branch_coverage=1 00:41:40.961 --rc genhtml_function_coverage=1 00:41:40.961 --rc genhtml_legend=1 00:41:40.961 --rc geninfo_all_blocks=1 00:41:40.961 --rc geninfo_unexecuted_blocks=1 00:41:40.961 00:41:40.961 ' 00:41:40.961 06:10:14 nvmf_dif -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:40.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:40.961 --rc genhtml_branch_coverage=1 00:41:40.961 --rc genhtml_function_coverage=1 00:41:40.961 --rc genhtml_legend=1 00:41:40.961 --rc geninfo_all_blocks=1 00:41:40.961 --rc geninfo_unexecuted_blocks=1 00:41:40.961 00:41:40.961 ' 00:41:40.961 06:10:14 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:40.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:40.961 --rc genhtml_branch_coverage=1 00:41:40.961 --rc genhtml_function_coverage=1 00:41:40.961 --rc genhtml_legend=1 00:41:40.961 --rc geninfo_all_blocks=1 00:41:40.961 --rc geninfo_unexecuted_blocks=1 00:41:40.961 00:41:40.961 ' 00:41:40.961 06:10:14 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:40.961 06:10:14 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:41:40.961 06:10:14 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:40.961 06:10:14 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:40.961 06:10:14 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:40.961 06:10:14 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:40.961 06:10:14 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:40.961 06:10:14 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:40.961 06:10:14 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:40.961 06:10:14 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:40.961 06:10:14 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:40.961 06:10:14 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:40.961 06:10:14 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:40.961 06:10:14 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:40.961 06:10:14 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:40.961 06:10:14 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:40.961 06:10:14 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:40.961 06:10:14 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:40.961 06:10:14 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:40.961 06:10:14 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:40.961 06:10:14 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:40.961 06:10:14 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:40.961 06:10:14 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:40.961 06:10:14 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:41:40.961 06:10:14 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:40.961 06:10:14 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:41:40.962 06:10:14 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:40.962 06:10:14 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:40.962 06:10:14 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:40.962 06:10:14 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:40.962 06:10:14 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:40.962 06:10:14 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:40.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:40.962 06:10:14 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:40.962 06:10:14 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:40.962 06:10:14 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:40.962 06:10:14 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:41:40.962 06:10:14 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:41:40.962 06:10:14 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:41:40.962 06:10:14 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:41:40.962 06:10:14 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:41:40.962 06:10:14 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:41:40.962 06:10:14 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:40.962 06:10:14 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:41:40.962 06:10:14 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:41:40.962 06:10:14 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:41:40.962 06:10:14 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:40.962 06:10:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:40.962 06:10:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:40.962 06:10:14 nvmf_dif -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:41:40.962 06:10:14 nvmf_dif -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:41:40.962 06:10:14 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:41:40.962 06:10:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:46.281 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:46.281 06:10:19 nvmf_dif 
-- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:46.281 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:46.281 Found net devices under 0000:af:00.0: cvl_0_0 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:46.281 06:10:19 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:46.282 Found net devices under 0000:af:00.1: cvl_0_1 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@438 -- # is_hw=yes 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:46.282 
06:10:19 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:46.282 06:10:19 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:46.282 06:10:20 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:46.282 06:10:20 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:46.282 06:10:20 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:46.282 06:10:20 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:46.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:46.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:41:46.282 00:41:46.282 --- 10.0.0.2 ping statistics --- 00:41:46.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:46.282 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:41:46.282 06:10:20 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:46.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:46.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:41:46.282 00:41:46.282 --- 10.0.0.1 ping statistics --- 00:41:46.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:46.282 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:41:46.282 06:10:20 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:46.282 06:10:20 nvmf_dif -- nvmf/common.sh@446 -- # return 0 00:41:46.282 06:10:20 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:41:46.282 06:10:20 nvmf_dif -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:48.813 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:41:48.813 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:41:48.813 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:41:48.813 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:41:48.813 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:41:48.813 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:41:48.813 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:41:48.813 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:41:48.813 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:41:48.813 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:41:48.813 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:41:48.813 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:41:48.813 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:41:48.813 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:41:48.813 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:41:48.813 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:41:48.813 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:41:49.071 06:10:22 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:49.071 06:10:22 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:41:49.071 06:10:22 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:41:49.071 06:10:22 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:49.071 06:10:22 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:41:49.071 06:10:22 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:41:49.071 06:10:22 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:41:49.071 06:10:22 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:41:49.071 06:10:22 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:41:49.071 06:10:22 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:49.071 06:10:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:49.071 06:10:22 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=3671391 00:41:49.071 06:10:22 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 3671391 00:41:49.071 06:10:22 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:41:49.071 06:10:22 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 3671391 ']' 00:41:49.071 06:10:22 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:49.071 06:10:22 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:49.071 06:10:22 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:41:49.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:49.071 06:10:22 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:49.071 06:10:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:49.071 [2024-12-16 06:10:22.774575] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:41:49.071 [2024-12-16 06:10:22.774618] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:49.071 [2024-12-16 06:10:22.836097] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:49.071 [2024-12-16 06:10:22.874165] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:49.071 [2024-12-16 06:10:22.874208] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:49.071 [2024-12-16 06:10:22.874216] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:49.071 [2024-12-16 06:10:22.874223] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:49.071 [2024-12-16 06:10:22.874228] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:49.071 [2024-12-16 06:10:22.874247] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:49.331 06:10:22 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:49.331 06:10:22 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:41:49.331 06:10:22 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:41:49.331 06:10:22 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:49.331 06:10:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:49.331 06:10:22 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:49.331 06:10:22 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:41:49.331 06:10:22 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:41:49.331 06:10:22 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.331 06:10:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:49.331 [2024-12-16 06:10:23.003094] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:49.331 06:10:23 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.331 06:10:23 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:41:49.331 06:10:23 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:49.331 06:10:23 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:49.331 06:10:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:49.331 ************************************ 00:41:49.331 START TEST fio_dif_1_default 00:41:49.331 ************************************ 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:49.331 bdev_null0 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:49.331 [2024-12-16 06:10:23.075421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:49.331 { 00:41:49.331 "params": { 00:41:49.331 "name": "Nvme$subsystem", 00:41:49.331 "trtype": "$TEST_TRANSPORT", 00:41:49.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:49.331 "adrfam": "ipv4", 00:41:49.331 "trsvcid": "$NVMF_PORT", 00:41:49.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:49.331 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:41:49.331 "hdgst": ${hdgst:-false}, 00:41:49.331 "ddgst": ${ddgst:-false} 00:41:49.331 }, 00:41:49.331 "method": "bdev_nvme_attach_controller" 00:41:49.331 } 00:41:49.331 EOF 00:41:49.331 )") 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 
00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:49.331 "params": { 00:41:49.331 "name": "Nvme0", 00:41:49.331 "trtype": "tcp", 00:41:49.331 "traddr": "10.0.0.2", 00:41:49.331 "adrfam": "ipv4", 00:41:49.331 "trsvcid": "4420", 00:41:49.331 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:49.331 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:49.331 "hdgst": false, 00:41:49.331 "ddgst": false 00:41:49.331 }, 00:41:49.331 "method": "bdev_nvme_attach_controller" 00:41:49.331 }' 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:49.331 06:10:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:49.896 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:49.896 fio-3.35 00:41:49.896 Starting 1 thread 00:42:02.096 00:42:02.096 filename0: (groupid=0, jobs=1): err= 0: pid=3671677: Mon Dec 16 06:10:33 2024 00:42:02.096 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10024msec) 00:42:02.096 slat (nsec): min=5929, max=46819, avg=6297.63, stdev=1548.74 00:42:02.096 clat (usec): min=40903, max=45239, avg=41573.03, stdev=543.00 00:42:02.097 lat (usec): min=40909, max=45285, avg=41579.33, stdev=543.31 00:42:02.097 clat percentiles (usec): 00:42:02.097 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:02.097 | 30.00th=[41157], 40.00th=[41157], 50.00th=[42206], 60.00th=[42206], 00:42:02.097 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:42:02.097 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:42:02.097 | 99.99th=[45351] 00:42:02.097 bw ( KiB/s): min= 352, max= 416, per=99.82%, avg=384.00, stdev=10.38, samples=20 00:42:02.097 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:42:02.097 lat (msec) : 50=100.00% 00:42:02.097 cpu : usr=92.87%, sys=6.87%, ctx=16, majf=0, minf=0 00:42:02.097 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:02.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.097 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.097 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:02.097 00:42:02.097 Run status group 0 (all jobs): 
00:42:02.097 READ: bw=385KiB/s (394kB/s), 385KiB/s-385KiB/s (394kB/s-394kB/s), io=3856KiB (3949kB), run=10024-10024msec 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:02.097 00:42:02.097 real 0m11.060s 00:42:02.097 user 0m16.222s 00:42:02.097 sys 0m0.980s 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:02.097 ************************************ 00:42:02.097 END TEST fio_dif_1_default 00:42:02.097 ************************************ 00:42:02.097 06:10:34 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:42:02.097 06:10:34 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:02.097 06:10:34 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:02.097 06:10:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:02.097 ************************************ 00:42:02.097 START TEST fio_dif_1_multi_subsystems 00:42:02.097 ************************************ 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:02.097 bdev_null0 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:02.097 [2024-12-16 06:10:34.207696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:02.097 bdev_null1 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:02.097 { 00:42:02.097 "params": { 00:42:02.097 "name": "Nvme$subsystem", 00:42:02.097 "trtype": "$TEST_TRANSPORT", 00:42:02.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:02.097 "adrfam": "ipv4", 00:42:02.097 "trsvcid": "$NVMF_PORT", 00:42:02.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:02.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:02.097 "hdgst": ${hdgst:-false}, 00:42:02.097 "ddgst": ${ddgst:-false} 00:42:02.097 }, 00:42:02.097 "method": "bdev_nvme_attach_controller" 00:42:02.097 } 00:42:02.097 EOF 00:42:02.097 )") 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:02.097 { 00:42:02.097 "params": { 00:42:02.097 "name": "Nvme$subsystem", 00:42:02.097 "trtype": "$TEST_TRANSPORT", 00:42:02.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:02.097 "adrfam": "ipv4", 00:42:02.097 "trsvcid": "$NVMF_PORT", 00:42:02.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:02.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:02.097 "hdgst": ${hdgst:-false}, 00:42:02.097 "ddgst": ${ddgst:-false} 00:42:02.097 }, 00:42:02.097 "method": "bdev_nvme_attach_controller" 00:42:02.097 } 00:42:02.097 EOF 00:42:02.097 )") 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:42:02.097 "params": { 00:42:02.097 "name": "Nvme0", 00:42:02.097 "trtype": "tcp", 00:42:02.097 "traddr": "10.0.0.2", 00:42:02.097 "adrfam": "ipv4", 00:42:02.097 "trsvcid": "4420", 00:42:02.097 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:02.097 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:02.097 "hdgst": false, 00:42:02.097 "ddgst": false 00:42:02.097 }, 00:42:02.097 "method": "bdev_nvme_attach_controller" 00:42:02.097 },{ 00:42:02.097 "params": { 00:42:02.097 "name": "Nvme1", 00:42:02.097 "trtype": "tcp", 00:42:02.097 "traddr": "10.0.0.2", 00:42:02.097 "adrfam": "ipv4", 00:42:02.097 "trsvcid": "4420", 00:42:02.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:02.097 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:02.097 "hdgst": false, 00:42:02.097 "ddgst": false 00:42:02.097 }, 00:42:02.097 "method": "bdev_nvme_attach_controller" 00:42:02.097 }' 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:02.097 06:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:02.097 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:02.097 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:02.097 fio-3.35 00:42:02.097 Starting 2 threads 00:42:12.082 00:42:12.082 filename0: (groupid=0, jobs=1): err= 0: pid=3673589: Mon Dec 16 06:10:45 2024 00:42:12.082 read: IOPS=98, BW=393KiB/s (403kB/s)(3936KiB/10011msec) 00:42:12.082 slat (nsec): min=5941, max=43645, avg=8029.40, stdev=3160.11 00:42:12.082 clat (usec): min=407, max=42067, avg=40670.95, stdev=3650.05 00:42:12.082 lat (usec): min=413, max=42089, avg=40678.98, stdev=3650.11 00:42:12.082 clat percentiles (usec): 00:42:12.082 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:42:12.082 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:12.082 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:12.082 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:12.082 | 99.99th=[42206] 00:42:12.082 bw ( KiB/s): min= 384, max= 416, per=40.44%, avg=392.00, stdev=14.22, samples=20 00:42:12.082 iops : min= 96, max= 104, avg=98.00, stdev= 3.55, samples=20 00:42:12.082 lat (usec) : 500=0.81% 00:42:12.082 lat (msec) : 50=99.19% 00:42:12.082 cpu : usr=97.02%, sys=2.74%, ctx=23, majf=0, minf=166 00:42:12.082 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:12.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:12.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:12.082 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:12.082 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:12.082 filename1: (groupid=0, jobs=1): err= 0: pid=3673590: Mon Dec 16 06:10:45 2024 00:42:12.082 read: IOPS=143, BW=574KiB/s (588kB/s)(5744KiB/10010msec) 00:42:12.082 slat (nsec): min=5887, max=28686, avg=7384.35, stdev=2450.72 00:42:12.082 clat (usec): min=411, max=42615, avg=27861.33, stdev=19184.20 00:42:12.082 lat (usec): min=417, max=42622, avg=27868.71, stdev=19183.92 00:42:12.082 clat percentiles (usec): 00:42:12.082 | 1.00th=[ 416], 5.00th=[ 424], 10.00th=[ 429], 20.00th=[ 441], 00:42:12.082 | 30.00th=[ 570], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:12.082 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:42:12.082 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:42:12.082 | 99.99th=[42730] 00:42:12.082 bw ( KiB/s): min= 384, max= 768, per=59.16%, avg=572.80, stdev=183.06, samples=20 00:42:12.082 iops : min= 96, max= 192, avg=143.20, stdev=45.77, samples=20 00:42:12.082 lat (usec) : 500=27.99%, 750=4.87% 00:42:12.082 lat (msec) : 50=67.13% 00:42:12.082 cpu : usr=96.72%, sys=3.05%, ctx=13, majf=0, minf=49 00:42:12.082 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:12.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:12.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:42:12.082 issued rwts: total=1436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:12.082 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:12.082 00:42:12.082 Run status group 0 (all jobs): 00:42:12.082 READ: bw=967KiB/s (990kB/s), 393KiB/s-574KiB/s (403kB/s-588kB/s), io=9680KiB (9912kB), run=10010-10011msec 00:42:12.082 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:42:12.082 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:42:12.082 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:12.082 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:12.082 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:42:12.082 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:12.083 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:12.083 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:12.083 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:12.083 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:12.083 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:12.083 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:12.083 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:12.083 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:12.083 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:12.083 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:42:12.083 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:12.083 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:12.083 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:12.083 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:12.083 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:12.083 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:12.083 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:12.083 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:12.083 00:42:12.083 real 0m11.296s 00:42:12.083 user 0m26.338s 00:42:12.083 sys 0m0.877s 00:42:12.083 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:12.083 06:10:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:12.083 ************************************ 00:42:12.083 END TEST fio_dif_1_multi_subsystems 00:42:12.083 ************************************ 00:42:12.083 06:10:45 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:42:12.083 06:10:45 nvmf_dif -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:12.083 06:10:45 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:12.083 06:10:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:12.083 ************************************ 00:42:12.083 START TEST fio_dif_rand_params 00:42:12.083 ************************************ 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:12.083 bdev_null0 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:12.083 [2024-12-16 06:10:45.583772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:12.083 { 00:42:12.083 "params": { 00:42:12.083 "name": "Nvme$subsystem", 00:42:12.083 "trtype": "$TEST_TRANSPORT", 00:42:12.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:12.083 "adrfam": "ipv4", 00:42:12.083 "trsvcid": "$NVMF_PORT", 00:42:12.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:12.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:12.083 "hdgst": ${hdgst:-false}, 00:42:12.083 "ddgst": ${ddgst:-false} 00:42:12.083 }, 00:42:12.083 "method": "bdev_nvme_attach_controller" 00:42:12.083 } 00:42:12.083 EOF 00:42:12.083 )") 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
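For reference, the subsystem setup that the dif.sh helpers trace above reduces to four SPDK RPC calls. Below is a minimal sketch using the in-tree scripts/rpc.py client instead of the test's rpc_cmd wrapper, assuming a running nvmf_tgt whose TCP transport was created earlier in the run; the bdev sizes, NQN, serial number and listen address are the ones shown in the trace:

    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, protection information (DIF) type 3
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    # new subsystem that any host may connect to, with the null bdev added as a namespace
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    # accept NVMe/TCP initiators on 10.0.0.2:4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420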
00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:42:12.083 "params": { 00:42:12.083 "name": "Nvme0", 00:42:12.083 "trtype": "tcp", 00:42:12.083 "traddr": "10.0.0.2", 00:42:12.083 "adrfam": "ipv4", 00:42:12.083 "trsvcid": "4420", 00:42:12.083 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:12.083 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:12.083 "hdgst": false, 00:42:12.083 "ddgst": false 00:42:12.083 }, 00:42:12.083 "method": "bdev_nvme_attach_controller" 00:42:12.083 }' 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:12.083 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:12.084 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:12.084 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:12.084 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:42:12.084 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:12.084 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:12.084 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:12.084 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:12.084 06:10:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:12.341 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:12.341 ... 
00:42:12.341 fio-3.35 00:42:12.341 Starting 3 threads 00:42:17.602 00:42:17.602 filename0: (groupid=0, jobs=1): err= 0: pid=3675379: Mon Dec 16 06:10:51 2024 00:42:17.602 read: IOPS=313, BW=39.2MiB/s (41.1MB/s)(196MiB/5006msec) 00:42:17.602 slat (nsec): min=6141, max=31161, avg=11075.18, stdev=2164.58 00:42:17.602 clat (usec): min=4087, max=50529, avg=9546.63, stdev=6685.89 00:42:17.602 lat (usec): min=4096, max=50541, avg=9557.71, stdev=6685.88 00:42:17.602 clat percentiles (usec): 00:42:17.602 | 1.00th=[ 5342], 5.00th=[ 6259], 10.00th=[ 6849], 20.00th=[ 7504], 00:42:17.602 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:42:17.602 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10159], 95.00th=[10814], 00:42:17.602 | 99.00th=[48497], 99.50th=[49021], 99.90th=[50070], 99.95th=[50594], 00:42:17.602 | 99.99th=[50594] 00:42:17.602 bw ( KiB/s): min=35328, max=51456, per=32.43%, avg=40140.80, stdev=4516.71, samples=10 00:42:17.602 iops : min= 276, max= 402, avg=313.60, stdev=35.29, samples=10 00:42:17.602 lat (msec) : 10=87.97%, 20=9.17%, 50=2.74%, 100=0.13% 00:42:17.602 cpu : usr=94.01%, sys=5.71%, ctx=11, majf=0, minf=70 00:42:17.602 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:17.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:17.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:17.602 issued rwts: total=1571,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:17.602 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:17.602 filename0: (groupid=0, jobs=1): err= 0: pid=3675380: Mon Dec 16 06:10:51 2024 00:42:17.602 read: IOPS=334, BW=41.9MiB/s (43.9MB/s)(209MiB/5002msec) 00:42:17.602 slat (nsec): min=6084, max=25025, avg=10875.24, stdev=2188.51 00:42:17.602 clat (usec): min=3137, max=51110, avg=8944.68, stdev=4817.87 00:42:17.602 lat (usec): min=3143, max=51121, avg=8955.56, stdev=4818.16 00:42:17.602 clat percentiles (usec): 00:42:17.602 | 1.00th=[ 3556], 5.00th=[ 4621], 10.00th=[ 6063], 20.00th=[ 6980], 00:42:17.602 | 30.00th=[ 7832], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 9110], 00:42:17.602 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10683], 95.00th=[11207], 00:42:17.602 | 99.00th=[46924], 99.50th=[48497], 99.90th=[51119], 99.95th=[51119], 00:42:17.602 | 99.99th=[51119] 00:42:17.602 bw ( KiB/s): min=32000, max=49408, per=34.34%, avg=42496.00, stdev=5296.17, samples=9 00:42:17.602 iops : min= 250, max= 386, avg=332.00, stdev=41.38, samples=9 00:42:17.602 lat (msec) : 4=3.40%, 10=76.84%, 20=18.51%, 50=1.07%, 100=0.18% 00:42:17.602 cpu : usr=94.06%, sys=5.64%, ctx=16, majf=0, minf=47 00:42:17.602 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:17.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:17.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:17.602 issued rwts: total=1675,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:17.602 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:17.602 filename0: (groupid=0, jobs=1): err= 0: pid=3675381: Mon Dec 16 06:10:51 2024 00:42:17.602 read: IOPS=319, BW=39.9MiB/s (41.9MB/s)(200MiB/5014msec) 00:42:17.602 slat (nsec): min=6117, max=24699, avg=11254.63, stdev=2144.48 00:42:17.602 clat (usec): min=3273, max=50708, avg=9374.35, stdev=4890.25 00:42:17.602 lat (usec): min=3279, max=50720, avg=9385.60, stdev=4890.35 00:42:17.602 clat percentiles (usec): 00:42:17.602 | 1.00th=[ 3523], 5.00th=[ 5407], 10.00th=[ 6063], 
20.00th=[ 7308], 00:42:17.602 | 30.00th=[ 8291], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9765], 00:42:17.602 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11076], 95.00th=[11600], 00:42:17.602 | 99.00th=[46924], 99.50th=[49021], 99.90th=[50594], 99.95th=[50594], 00:42:17.602 | 99.99th=[50594] 00:42:17.602 bw ( KiB/s): min=35328, max=45312, per=33.07%, avg=40934.40, stdev=3371.37, samples=10 00:42:17.602 iops : min= 276, max= 354, avg=319.80, stdev=26.34, samples=10 00:42:17.602 lat (msec) : 4=3.25%, 10=65.42%, 20=30.02%, 50=1.12%, 100=0.19% 00:42:17.602 cpu : usr=93.88%, sys=5.80%, ctx=9, majf=0, minf=24 00:42:17.602 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:17.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:17.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:17.602 issued rwts: total=1602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:17.602 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:17.602 00:42:17.602 Run status group 0 (all jobs): 00:42:17.602 READ: bw=121MiB/s (127MB/s), 39.2MiB/s-41.9MiB/s (41.1MB/s-43.9MB/s), io=606MiB (635MB), run=5002-5014msec 00:42:17.860 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:17.861 bdev_null0 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:17.861 [2024-12-16 06:10:51.582312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:17.861 bdev_null1 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:17.861 06:10:51 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:17.861 bdev_null2 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
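The fio invocation being assembled here follows the same pattern as the earlier multi-subsystem run: the SPDK bdev fio plugin built in this workspace is LD_PRELOADed into the stock fio binary, the generated bdev_nvme_attach_controller JSON is passed via --spdk_json_conf, and the generated job file is the last argument. A stripped-down sketch of that command line, with paths taken from this run (the test streams both config files through /dev/fd descriptors; ordinary files work the same way):

    # preload the SPDK ioengine plugin, then point fio at the JSON bdev config and the job file
    PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    LD_PRELOAD=$PLUGIN /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf /dev/fd/62 /dev/fd/61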
00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:17.861 { 00:42:17.861 "params": { 00:42:17.861 "name": "Nvme$subsystem", 00:42:17.861 "trtype": "$TEST_TRANSPORT", 00:42:17.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:17.861 "adrfam": "ipv4", 00:42:17.861 "trsvcid": "$NVMF_PORT", 00:42:17.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:17.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:17.861 "hdgst": ${hdgst:-false}, 00:42:17.861 "ddgst": ${ddgst:-false} 00:42:17.861 }, 00:42:17.861 "method": "bdev_nvme_attach_controller" 00:42:17.861 } 00:42:17.861 EOF 00:42:17.861 )") 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:17.861 06:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:17.861 { 00:42:17.861 "params": { 00:42:17.861 "name": "Nvme$subsystem", 00:42:17.861 "trtype": "$TEST_TRANSPORT", 00:42:17.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:17.862 "adrfam": "ipv4", 00:42:17.862 "trsvcid": "$NVMF_PORT", 00:42:17.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:17.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:17.862 "hdgst": ${hdgst:-false}, 00:42:17.862 "ddgst": ${ddgst:-false} 00:42:17.862 }, 00:42:17.862 "method": "bdev_nvme_attach_controller" 00:42:17.862 } 00:42:17.862 EOF 00:42:17.862 )") 00:42:17.862 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:17.862 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
00:42:17.862 06:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:42:17.862 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:17.862 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:17.862 06:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:17.862 06:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:17.862 06:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:17.862 { 00:42:17.862 "params": { 00:42:17.862 "name": "Nvme$subsystem", 00:42:17.862 "trtype": "$TEST_TRANSPORT", 00:42:17.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:17.862 "adrfam": "ipv4", 00:42:17.862 "trsvcid": "$NVMF_PORT", 00:42:17.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:17.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:17.862 "hdgst": ${hdgst:-false}, 00:42:17.862 "ddgst": ${ddgst:-false} 00:42:17.862 }, 00:42:17.862 "method": "bdev_nvme_attach_controller" 00:42:17.862 } 00:42:17.862 EOF 00:42:17.862 )") 00:42:17.862 06:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:42:17.862 06:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:42:17.862 06:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:42:17.862 06:10:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:42:17.862 "params": { 00:42:17.862 "name": "Nvme0", 00:42:17.862 "trtype": "tcp", 00:42:17.862 "traddr": "10.0.0.2", 00:42:17.862 "adrfam": "ipv4", 00:42:17.862 "trsvcid": "4420", 00:42:17.862 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:17.862 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:17.862 "hdgst": false, 00:42:17.862 "ddgst": false 00:42:17.862 }, 00:42:17.862 "method": "bdev_nvme_attach_controller" 00:42:17.862 },{ 00:42:17.862 "params": { 00:42:17.862 "name": "Nvme1", 00:42:17.862 "trtype": "tcp", 00:42:17.862 "traddr": "10.0.0.2", 00:42:17.862 "adrfam": "ipv4", 00:42:17.862 "trsvcid": "4420", 00:42:17.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:17.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:17.862 "hdgst": false, 00:42:17.862 "ddgst": false 00:42:17.862 }, 00:42:17.862 "method": "bdev_nvme_attach_controller" 00:42:17.862 },{ 00:42:17.862 "params": { 00:42:17.862 "name": "Nvme2", 00:42:17.862 "trtype": "tcp", 00:42:17.862 "traddr": "10.0.0.2", 00:42:17.862 "adrfam": "ipv4", 00:42:17.862 "trsvcid": "4420", 00:42:17.862 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:42:17.862 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:42:17.862 "hdgst": false, 00:42:17.862 "ddgst": false 00:42:17.862 }, 00:42:17.862 "method": "bdev_nvme_attach_controller" 00:42:17.862 }' 00:42:17.862 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:17.862 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:17.862 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:17.862 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:42:17.862 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:17.862 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:18.149 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # 
asan_lib= 00:42:18.149 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:18.149 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:18.149 06:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:18.413 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:18.413 ... 00:42:18.413 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:18.413 ... 00:42:18.413 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:18.413 ... 00:42:18.413 fio-3.35 00:42:18.413 Starting 24 threads 00:42:30.600 00:42:30.600 filename0: (groupid=0, jobs=1): err= 0: pid=3676614: Mon Dec 16 06:11:03 2024 00:42:30.600 read: IOPS=525, BW=2101KiB/s (2152kB/s)(20.6MiB/10021msec) 00:42:30.600 slat (nsec): min=7350, max=86844, avg=18180.33, stdev=12191.49 00:42:30.600 clat (usec): min=16905, max=38713, avg=30286.11, stdev=1021.47 00:42:30.600 lat (usec): min=16914, max=38743, avg=30304.29, stdev=1021.05 00:42:30.600 clat percentiles (usec): 00:42:30.600 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:42:30.600 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:42:30.600 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:42:30.600 | 99.00th=[31327], 99.50th=[31851], 99.90th=[38536], 99.95th=[38536], 00:42:30.600 | 99.99th=[38536] 00:42:30.600 bw ( KiB/s): min= 2043, max= 2176, per=4.17%, avg=2101.58, stdev=64.68, samples=19 00:42:30.600 iops : min= 510, max= 544, avg=525.32, stdev=16.17, samples=19 00:42:30.600 lat (msec) : 20=0.30%, 50=99.70% 00:42:30.600 cpu : usr=98.37%, sys=1.23%, ctx=15, majf=0, minf=9 00:42:30.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:30.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.600 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.600 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.600 filename0: (groupid=0, jobs=1): err= 0: pid=3676615: Mon Dec 16 06:11:03 2024 00:42:30.600 read: IOPS=524, BW=2099KiB/s (2149kB/s)(20.5MiB/10003msec) 00:42:30.600 slat (nsec): min=6131, max=79943, avg=24331.03, stdev=7974.09 00:42:30.600 clat (usec): min=10594, max=61114, avg=30280.60, stdev=2150.75 00:42:30.600 lat (usec): min=10614, max=61131, avg=30304.93, stdev=2149.70 00:42:30.600 clat percentiles (usec): 00:42:30.600 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:42:30.600 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:42:30.600 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:42:30.600 | 99.00th=[31327], 99.50th=[31589], 99.90th=[61080], 99.95th=[61080], 00:42:30.600 | 99.99th=[61080] 00:42:30.600 bw ( KiB/s): min= 1923, max= 2176, per=4.14%, avg=2088.05, stdev=74.01, samples=19 00:42:30.600 iops : min= 480, max= 544, avg=521.89, stdev=18.58, samples=19 00:42:30.600 lat (msec) : 20=0.61%, 50=99.09%, 100=0.30% 00:42:30.600 cpu : usr=98.41%, sys=1.22%, ctx=5, majf=0, minf=9 00:42:30.600 IO depths : 1=6.2%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:30.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.600 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.600 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.600 filename0: (groupid=0, jobs=1): err= 0: pid=3676616: Mon Dec 16 06:11:03 2024 00:42:30.600 read: IOPS=524, BW=2099KiB/s (2149kB/s)(20.5MiB/10002msec) 00:42:30.600 slat (nsec): min=6610, max=79564, avg=24207.52, stdev=7972.70 00:42:30.600 clat (usec): min=10631, max=63444, avg=30270.18, stdev=2127.00 00:42:30.600 lat (usec): min=10644, max=63463, avg=30294.39, stdev=2126.29 00:42:30.600 clat percentiles (usec): 00:42:30.600 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:42:30.600 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:42:30.600 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:42:30.600 | 99.00th=[31327], 99.50th=[31589], 99.90th=[60031], 99.95th=[60031], 00:42:30.600 | 99.99th=[63701] 00:42:30.600 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2088.37, stdev=74.11, samples=19 00:42:30.600 iops : min= 480, max= 544, avg=522.05, stdev=18.48, samples=19 00:42:30.600 lat (msec) : 20=0.61%, 50=99.09%, 100=0.30% 00:42:30.600 cpu : usr=98.54%, sys=1.06%, ctx=17, majf=0, minf=9 00:42:30.600 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:30.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.600 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.600 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.600 filename0: (groupid=0, jobs=1): err= 0: pid=3676617: Mon Dec 16 06:11:03 2024 00:42:30.600 read: IOPS=525, BW=2101KiB/s (2152kB/s)(20.5MiB/10001msec) 00:42:30.600 slat (nsec): min=6116, max=80091, avg=22072.01, stdev=8612.74 00:42:30.600 clat (usec): min=17466, max=47477, avg=30281.77, stdev=1652.11 00:42:30.600 lat (usec): min=17494, max=47518, avg=30303.85, stdev=1651.84 00:42:30.600 clat percentiles (usec): 00:42:30.600 | 1.00th=[25297], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:42:30.600 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:42:30.600 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:42:30.600 | 99.00th=[35390], 99.50th=[36439], 99.90th=[47449], 99.95th=[47449], 00:42:30.600 | 99.99th=[47449] 00:42:30.600 bw ( KiB/s): min= 2043, max= 2176, per=4.16%, avg=2096.89, stdev=61.95, samples=19 00:42:30.600 iops : min= 510, max= 544, avg=524.11, stdev=15.43, samples=19 00:42:30.600 lat (msec) : 20=0.61%, 50=99.39% 00:42:30.600 cpu : usr=98.50%, sys=1.13%, ctx=19, majf=0, minf=9 00:42:30.600 IO depths : 1=5.7%, 2=11.7%, 4=24.2%, 8=51.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:42:30.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.600 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.600 issued rwts: total=5254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.600 filename0: (groupid=0, jobs=1): err= 0: pid=3676618: Mon Dec 16 06:11:03 2024 00:42:30.600 read: IOPS=524, BW=2097KiB/s (2147kB/s)(20.5MiB/10010msec) 00:42:30.600 slat (nsec): min=9297, max=71389, 
avg=28414.66, stdev=9805.64 00:42:30.600 clat (usec): min=15532, max=46741, avg=30286.50, stdev=1029.45 00:42:30.600 lat (usec): min=15541, max=46775, avg=30314.92, stdev=1028.67 00:42:30.600 clat percentiles (usec): 00:42:30.600 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:42:30.600 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:42:30.600 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:42:30.600 | 99.00th=[31327], 99.50th=[32113], 99.90th=[42730], 99.95th=[42730], 00:42:30.600 | 99.99th=[46924] 00:42:30.600 bw ( KiB/s): min= 2043, max= 2176, per=4.15%, avg=2094.63, stdev=63.31, samples=19 00:42:30.600 iops : min= 510, max= 544, avg=523.58, stdev=15.81, samples=19 00:42:30.600 lat (msec) : 20=0.04%, 50=99.96% 00:42:30.600 cpu : usr=98.41%, sys=1.21%, ctx=16, majf=0, minf=9 00:42:30.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:30.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.600 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.600 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.600 filename0: (groupid=0, jobs=1): err= 0: pid=3676619: Mon Dec 16 06:11:03 2024 00:42:30.600 read: IOPS=524, BW=2099KiB/s (2149kB/s)(20.5MiB/10003msec) 00:42:30.600 slat (nsec): min=5972, max=62810, avg=28992.80, stdev=9268.56 00:42:30.600 clat (usec): min=14538, max=43927, avg=30231.39, stdev=737.46 00:42:30.600 lat (usec): min=14547, max=43944, avg=30260.38, stdev=737.53 00:42:30.600 clat percentiles (usec): 00:42:30.600 | 1.00th=[29754], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:42:30.600 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:42:30.600 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:42:30.600 | 99.00th=[31327], 99.50th=[31851], 99.90th=[35914], 99.95th=[35914], 00:42:30.600 | 99.99th=[43779] 00:42:30.600 bw ( KiB/s): min= 2043, max= 2176, per=4.15%, avg=2094.84, stdev=63.15, samples=19 00:42:30.600 iops : min= 510, max= 544, avg=523.63, stdev=15.77, samples=19 00:42:30.600 lat (msec) : 20=0.04%, 50=99.96% 00:42:30.600 cpu : usr=98.54%, sys=1.09%, ctx=14, majf=0, minf=9 00:42:30.601 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:30.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.601 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.601 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.601 filename0: (groupid=0, jobs=1): err= 0: pid=3676620: Mon Dec 16 06:11:03 2024 00:42:30.601 read: IOPS=524, BW=2099KiB/s (2149kB/s)(20.5MiB/10001msec) 00:42:30.601 slat (nsec): min=7836, max=63110, avg=28554.96, stdev=9675.70 00:42:30.601 clat (usec): min=21866, max=33331, avg=30218.28, stdev=583.94 00:42:30.601 lat (usec): min=21882, max=33358, avg=30246.83, stdev=584.72 00:42:30.601 clat percentiles (usec): 00:42:30.601 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:42:30.601 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:42:30.601 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30802], 95.00th=[30802], 00:42:30.601 | 99.00th=[31327], 99.50th=[31851], 99.90th=[33162], 99.95th=[33424], 00:42:30.601 | 99.99th=[33424] 00:42:30.601 bw ( KiB/s): min= 
2043, max= 2176, per=4.17%, avg=2101.37, stdev=64.86, samples=19 00:42:30.601 iops : min= 510, max= 544, avg=525.26, stdev=16.21, samples=19 00:42:30.601 lat (msec) : 50=100.00% 00:42:30.601 cpu : usr=98.34%, sys=1.28%, ctx=14, majf=0, minf=9 00:42:30.601 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:30.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.601 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.601 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.601 filename0: (groupid=0, jobs=1): err= 0: pid=3676621: Mon Dec 16 06:11:03 2024 00:42:30.601 read: IOPS=523, BW=2093KiB/s (2143kB/s)(20.4MiB/10001msec) 00:42:30.601 slat (nsec): min=7972, max=61773, avg=26774.17, stdev=10625.10 00:42:30.601 clat (usec): min=20291, max=55481, avg=30332.21, stdev=1360.71 00:42:30.601 lat (usec): min=20300, max=55506, avg=30358.98, stdev=1359.98 00:42:30.601 clat percentiles (usec): 00:42:30.601 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:42:30.601 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:42:30.601 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:42:30.601 | 99.00th=[31589], 99.50th=[35914], 99.90th=[52167], 99.95th=[52167], 00:42:30.601 | 99.99th=[55313] 00:42:30.601 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2094.63, stdev=76.34, samples=19 00:42:30.601 iops : min= 480, max= 544, avg=523.58, stdev=19.07, samples=19 00:42:30.601 lat (msec) : 50=99.69%, 100=0.31% 00:42:30.601 cpu : usr=98.58%, sys=1.04%, ctx=14, majf=0, minf=9 00:42:30.601 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:30.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.601 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.601 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.601 filename1: (groupid=0, jobs=1): err= 0: pid=3676622: Mon Dec 16 06:11:03 2024 00:42:30.601 read: IOPS=524, BW=2097KiB/s (2147kB/s)(20.5MiB/10010msec) 00:42:30.601 slat (nsec): min=9366, max=73733, avg=28830.92, stdev=8842.09 00:42:30.601 clat (usec): min=15651, max=46236, avg=30262.91, stdev=1028.44 00:42:30.601 lat (usec): min=15661, max=46252, avg=30291.74, stdev=1028.16 00:42:30.601 clat percentiles (usec): 00:42:30.601 | 1.00th=[29754], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:42:30.601 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:42:30.601 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:42:30.601 | 99.00th=[31327], 99.50th=[31851], 99.90th=[42730], 99.95th=[43254], 00:42:30.601 | 99.99th=[46400] 00:42:30.601 bw ( KiB/s): min= 2043, max= 2176, per=4.15%, avg=2094.63, stdev=63.31, samples=19 00:42:30.601 iops : min= 510, max= 544, avg=523.58, stdev=15.81, samples=19 00:42:30.601 lat (msec) : 20=0.04%, 50=99.96% 00:42:30.601 cpu : usr=98.47%, sys=1.15%, ctx=18, majf=0, minf=9 00:42:30.601 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:30.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.601 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.601 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.601 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:42:30.601 filename1: (groupid=0, jobs=1): err= 0: pid=3676623: Mon Dec 16 06:11:03 2024 00:42:30.601 read: IOPS=524, BW=2099KiB/s (2149kB/s)(20.5MiB/10003msec) 00:42:30.601 slat (nsec): min=7409, max=96086, avg=42372.68, stdev=17574.90 00:42:30.601 clat (usec): min=10236, max=61149, avg=30113.50, stdev=2323.74 00:42:30.601 lat (usec): min=10243, max=61178, avg=30155.87, stdev=2323.47 00:42:30.601 clat percentiles (usec): 00:42:30.601 | 1.00th=[28705], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:42:30.601 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:42:30.601 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:42:30.601 | 99.00th=[31327], 99.50th=[36439], 99.90th=[61080], 99.95th=[61080], 00:42:30.601 | 99.99th=[61080] 00:42:30.601 bw ( KiB/s): min= 1923, max= 2176, per=4.14%, avg=2088.05, stdev=74.15, samples=19 00:42:30.601 iops : min= 480, max= 544, avg=521.89, stdev=18.60, samples=19 00:42:30.601 lat (msec) : 20=0.61%, 50=99.09%, 100=0.30% 00:42:30.601 cpu : usr=98.60%, sys=1.01%, ctx=14, majf=0, minf=9 00:42:30.601 IO depths : 1=5.6%, 2=11.8%, 4=24.9%, 8=50.8%, 16=6.9%, 32=0.0%, >=64=0.0% 00:42:30.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.601 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.601 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.601 filename1: (groupid=0, jobs=1): err= 0: pid=3676624: Mon Dec 16 06:11:03 2024 00:42:30.601 read: IOPS=523, BW=2094KiB/s (2145kB/s)(20.5MiB/10004msec) 00:42:30.601 slat (nsec): min=7378, max=84617, avg=26976.07, stdev=9210.21 00:42:30.601 clat (usec): min=18759, max=57692, avg=30301.32, stdev=1375.49 00:42:30.601 lat (usec): min=18775, max=57717, avg=30328.30, stdev=1375.16 00:42:30.601 clat percentiles (usec): 00:42:30.601 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:42:30.601 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:42:30.601 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30802], 95.00th=[30802], 00:42:30.601 | 99.00th=[31851], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:42:30.601 | 99.99th=[57934] 00:42:30.601 bw ( KiB/s): min= 1971, max= 2176, per=4.15%, avg=2090.58, stdev=68.69, samples=19 00:42:30.601 iops : min= 492, max= 544, avg=522.53, stdev=17.23, samples=19 00:42:30.601 lat (msec) : 20=0.19%, 50=99.77%, 100=0.04% 00:42:30.601 cpu : usr=98.23%, sys=1.39%, ctx=14, majf=0, minf=9 00:42:30.601 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:42:30.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.601 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.601 issued rwts: total=5238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.601 filename1: (groupid=0, jobs=1): err= 0: pid=3676625: Mon Dec 16 06:11:03 2024 00:42:30.601 read: IOPS=524, BW=2099KiB/s (2149kB/s)(20.5MiB/10001msec) 00:42:30.601 slat (nsec): min=7369, max=80642, avg=23337.96, stdev=7791.74 00:42:30.601 clat (usec): min=17362, max=39369, avg=30293.99, stdev=928.89 00:42:30.601 lat (usec): min=17382, max=39386, avg=30317.33, stdev=927.83 00:42:30.601 clat percentiles (usec): 00:42:30.601 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:42:30.601 | 
30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:42:30.601 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:42:30.601 | 99.00th=[31327], 99.50th=[31589], 99.90th=[39060], 99.95th=[39584], 00:42:30.601 | 99.99th=[39584] 00:42:30.601 bw ( KiB/s): min= 2043, max= 2176, per=4.15%, avg=2094.11, stdev=63.18, samples=19 00:42:30.601 iops : min= 510, max= 544, avg=523.37, stdev=15.76, samples=19 00:42:30.601 lat (msec) : 20=0.30%, 50=99.70% 00:42:30.601 cpu : usr=98.39%, sys=1.24%, ctx=14, majf=0, minf=9 00:42:30.601 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:30.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.601 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.601 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.601 filename1: (groupid=0, jobs=1): err= 0: pid=3676626: Mon Dec 16 06:11:03 2024 00:42:30.601 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.7MiB/10002msec) 00:42:30.601 slat (nsec): min=4476, max=67747, avg=18212.70, stdev=9963.64 00:42:30.601 clat (usec): min=8289, max=39032, avg=30005.26, stdev=2270.71 00:42:30.601 lat (usec): min=8298, max=39040, avg=30023.47, stdev=2271.51 00:42:30.601 clat percentiles (usec): 00:42:30.601 | 1.00th=[14877], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:42:30.601 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:42:30.601 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:42:30.601 | 99.00th=[31327], 99.50th=[31589], 99.90th=[32113], 99.95th=[39060], 00:42:30.601 | 99.99th=[39060] 00:42:30.601 bw ( KiB/s): min= 2043, max= 2528, per=4.22%, avg=2126.89, stdev=116.25, samples=19 00:42:30.601 iops : min= 510, max= 632, avg=531.68, stdev=29.09, samples=19 00:42:30.601 lat (msec) : 10=0.13%, 20=1.66%, 50=98.21% 00:42:30.601 cpu : usr=98.43%, sys=1.22%, ctx=15, majf=0, minf=9 00:42:30.601 IO depths : 1=6.1%, 2=12.2%, 4=24.5%, 8=50.9%, 16=6.4%, 32=0.0%, >=64=0.0% 00:42:30.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.601 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.601 issued rwts: total=5308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.601 filename1: (groupid=0, jobs=1): err= 0: pid=3676627: Mon Dec 16 06:11:03 2024 00:42:30.601 read: IOPS=523, BW=2095KiB/s (2145kB/s)(20.5MiB/10020msec) 00:42:30.601 slat (nsec): min=7830, max=66104, avg=27022.07, stdev=10734.98 00:42:30.601 clat (usec): min=15663, max=46040, avg=30326.42, stdev=1932.35 00:42:30.601 lat (usec): min=15672, max=46063, avg=30353.44, stdev=1932.81 00:42:30.601 clat percentiles (usec): 00:42:30.601 | 1.00th=[20841], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:42:30.601 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:42:30.601 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:42:30.601 | 99.00th=[40109], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:42:30.601 | 99.99th=[45876] 00:42:30.602 bw ( KiB/s): min= 2032, max= 2176, per=4.15%, avg=2094.63, stdev=56.92, samples=19 00:42:30.602 iops : min= 508, max= 544, avg=523.58, stdev=14.21, samples=19 00:42:30.602 lat (msec) : 20=0.38%, 50=99.62% 00:42:30.602 cpu : usr=98.35%, sys=1.27%, ctx=13, majf=0, minf=9 00:42:30.602 IO depths : 1=2.0%, 2=8.2%, 
4=25.0%, 8=54.3%, 16=10.5%, 32=0.0%, >=64=0.0% 00:42:30.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.602 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.602 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.602 filename1: (groupid=0, jobs=1): err= 0: pid=3676628: Mon Dec 16 06:11:03 2024 00:42:30.602 read: IOPS=524, BW=2098KiB/s (2149kB/s)(20.5MiB/10004msec) 00:42:30.602 slat (nsec): min=6156, max=81279, avg=24285.98, stdev=8118.47 00:42:30.602 clat (usec): min=10522, max=61327, avg=30276.47, stdev=2164.98 00:42:30.602 lat (usec): min=10543, max=61345, avg=30300.76, stdev=2163.87 00:42:30.602 clat percentiles (usec): 00:42:30.602 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:42:30.602 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:42:30.602 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:42:30.602 | 99.00th=[31327], 99.50th=[31589], 99.90th=[61080], 99.95th=[61080], 00:42:30.602 | 99.99th=[61080] 00:42:30.602 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2087.89, stdev=74.39, samples=19 00:42:30.602 iops : min= 480, max= 544, avg=521.89, stdev=18.58, samples=19 00:42:30.602 lat (msec) : 20=0.61%, 50=99.09%, 100=0.30% 00:42:30.602 cpu : usr=98.14%, sys=1.48%, ctx=14, majf=0, minf=9 00:42:30.602 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:30.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.602 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.602 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.602 filename1: (groupid=0, jobs=1): err= 0: pid=3676629: Mon Dec 16 06:11:03 2024 00:42:30.602 read: IOPS=524, BW=2098KiB/s (2149kB/s)(20.5MiB/10004msec) 00:42:30.602 slat (nsec): min=5669, max=60504, avg=29180.53, stdev=9407.49 00:42:30.602 clat (usec): min=19743, max=41070, avg=30238.38, stdev=870.78 00:42:30.602 lat (usec): min=19751, max=41096, avg=30267.56, stdev=870.76 00:42:30.602 clat percentiles (usec): 00:42:30.602 | 1.00th=[29492], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:42:30.602 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:42:30.602 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:42:30.602 | 99.00th=[31327], 99.50th=[32113], 99.90th=[40109], 99.95th=[40633], 00:42:30.602 | 99.99th=[41157] 00:42:30.602 bw ( KiB/s): min= 2043, max= 2176, per=4.15%, avg=2094.63, stdev=63.31, samples=19 00:42:30.602 iops : min= 510, max= 544, avg=523.58, stdev=15.81, samples=19 00:42:30.602 lat (msec) : 20=0.04%, 50=99.96% 00:42:30.602 cpu : usr=98.24%, sys=1.38%, ctx=16, majf=0, minf=9 00:42:30.602 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:42:30.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.602 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.602 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.602 filename2: (groupid=0, jobs=1): err= 0: pid=3676630: Mon Dec 16 06:11:03 2024 00:42:30.602 read: IOPS=548, BW=2194KiB/s (2247kB/s)(21.4MiB/10002msec) 00:42:30.602 slat (nsec): min=6286, max=71265, avg=15207.22, 
stdev=8885.16 00:42:30.602 clat (usec): min=9256, max=73265, avg=29113.22, stdev=4784.60 00:42:30.602 lat (usec): min=9263, max=73284, avg=29128.43, stdev=4784.95 00:42:30.602 clat percentiles (usec): 00:42:30.602 | 1.00th=[13960], 5.00th=[20055], 10.00th=[22938], 20.00th=[27657], 00:42:30.602 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:42:30.602 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[32637], 00:42:30.602 | 99.00th=[42730], 99.50th=[44303], 99.90th=[60031], 99.95th=[60031], 00:42:30.602 | 99.99th=[72877] 00:42:30.602 bw ( KiB/s): min= 1904, max= 2352, per=4.33%, avg=2183.53, stdev=98.11, samples=19 00:42:30.602 iops : min= 476, max= 588, avg=545.84, stdev=24.47, samples=19 00:42:30.602 lat (msec) : 10=0.18%, 20=4.59%, 50=94.88%, 100=0.35% 00:42:30.602 cpu : usr=98.27%, sys=1.35%, ctx=17, majf=0, minf=9 00:42:30.602 IO depths : 1=0.1%, 2=0.1%, 4=1.7%, 8=81.0%, 16=17.1%, 32=0.0%, >=64=0.0% 00:42:30.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.602 complete : 0=0.0%, 4=89.2%, 8=9.5%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.602 issued rwts: total=5486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.602 filename2: (groupid=0, jobs=1): err= 0: pid=3676631: Mon Dec 16 06:11:03 2024 00:42:30.602 read: IOPS=524, BW=2096KiB/s (2147kB/s)(20.5MiB/10014msec) 00:42:30.602 slat (nsec): min=4318, max=49270, avg=16096.75, stdev=5574.47 00:42:30.602 clat (usec): min=27758, max=44325, avg=30380.71, stdev=820.93 00:42:30.602 lat (usec): min=27785, max=44340, avg=30396.81, stdev=820.33 00:42:30.602 clat percentiles (usec): 00:42:30.602 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:42:30.602 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:42:30.602 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:42:30.602 | 99.00th=[31065], 99.50th=[31589], 99.90th=[44303], 99.95th=[44303], 00:42:30.602 | 99.99th=[44303] 00:42:30.602 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2094.79, stdev=76.43, samples=19 00:42:30.602 iops : min= 480, max= 544, avg=523.58, stdev=19.26, samples=19 00:42:30.602 lat (msec) : 50=100.00% 00:42:30.602 cpu : usr=98.18%, sys=1.34%, ctx=57, majf=0, minf=9 00:42:30.602 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:30.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.602 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.602 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.602 filename2: (groupid=0, jobs=1): err= 0: pid=3676632: Mon Dec 16 06:11:03 2024 00:42:30.602 read: IOPS=525, BW=2101KiB/s (2152kB/s)(20.6MiB/10021msec) 00:42:30.602 slat (nsec): min=7376, max=71392, avg=18940.46, stdev=11575.53 00:42:30.602 clat (usec): min=18027, max=42570, avg=30274.02, stdev=990.28 00:42:30.602 lat (usec): min=18036, max=42597, avg=30292.96, stdev=990.59 00:42:30.602 clat percentiles (usec): 00:42:30.602 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:42:30.602 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:42:30.602 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:42:30.602 | 99.00th=[31327], 99.50th=[31851], 99.90th=[38536], 99.95th=[38536], 00:42:30.602 | 99.99th=[42730] 00:42:30.602 bw ( KiB/s): min= 
2043, max= 2176, per=4.17%, avg=2101.37, stdev=64.86, samples=19 00:42:30.602 iops : min= 510, max= 544, avg=525.26, stdev=16.21, samples=19 00:42:30.602 lat (msec) : 20=0.30%, 50=99.70% 00:42:30.602 cpu : usr=98.24%, sys=1.39%, ctx=15, majf=0, minf=9 00:42:30.602 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:30.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.602 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.602 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.602 filename2: (groupid=0, jobs=1): err= 0: pid=3676633: Mon Dec 16 06:11:03 2024 00:42:30.602 read: IOPS=528, BW=2113KiB/s (2164kB/s)(20.7MiB/10026msec) 00:42:30.602 slat (nsec): min=3455, max=67737, avg=25837.87, stdev=10053.61 00:42:30.602 clat (usec): min=8235, max=37722, avg=30086.50, stdev=1933.45 00:42:30.602 lat (usec): min=8251, max=37749, avg=30112.34, stdev=1934.21 00:42:30.602 clat percentiles (usec): 00:42:30.602 | 1.00th=[21890], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:42:30.602 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:42:30.602 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:42:30.602 | 99.00th=[31327], 99.50th=[31327], 99.90th=[32113], 99.95th=[32113], 00:42:30.602 | 99.99th=[37487] 00:42:30.602 bw ( KiB/s): min= 2043, max= 2304, per=4.19%, avg=2114.84, stdev=78.35, samples=19 00:42:30.602 iops : min= 510, max= 576, avg=528.63, stdev=19.60, samples=19 00:42:30.602 lat (msec) : 10=0.30%, 20=0.64%, 50=99.06% 00:42:30.602 cpu : usr=98.52%, sys=1.09%, ctx=16, majf=0, minf=9 00:42:30.602 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:30.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.602 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.602 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.602 filename2: (groupid=0, jobs=1): err= 0: pid=3676634: Mon Dec 16 06:11:03 2024 00:42:30.602 read: IOPS=527, BW=2112KiB/s (2162kB/s)(20.6MiB/10005msec) 00:42:30.602 slat (nsec): min=5997, max=56027, avg=16928.32, stdev=8200.91 00:42:30.602 clat (usec): min=10822, max=49244, avg=30173.32, stdev=2221.99 00:42:30.602 lat (usec): min=10831, max=49284, avg=30190.25, stdev=2222.36 00:42:30.602 clat percentiles (usec): 00:42:30.602 | 1.00th=[17957], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:42:30.602 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:42:30.602 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:42:30.602 | 99.00th=[35390], 99.50th=[42730], 99.90th=[48497], 99.95th=[49021], 00:42:30.602 | 99.99th=[49021] 00:42:30.602 bw ( KiB/s): min= 1920, max= 2288, per=4.18%, avg=2109.47, stdev=88.97, samples=19 00:42:30.602 iops : min= 480, max= 572, avg=527.37, stdev=22.24, samples=19 00:42:30.602 lat (msec) : 20=1.21%, 50=98.79% 00:42:30.602 cpu : usr=98.36%, sys=1.27%, ctx=13, majf=0, minf=9 00:42:30.602 IO depths : 1=5.5%, 2=11.4%, 4=23.9%, 8=52.2%, 16=7.0%, 32=0.0%, >=64=0.0% 00:42:30.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.602 complete : 0=0.0%, 4=93.8%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.602 issued rwts: total=5282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:42:30.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.602 filename2: (groupid=0, jobs=1): err= 0: pid=3676635: Mon Dec 16 06:11:03 2024 00:42:30.602 read: IOPS=527, BW=2112KiB/s (2162kB/s)(20.6MiB/10001msec) 00:42:30.602 slat (nsec): min=4536, max=63923, avg=17564.84, stdev=8505.84 00:42:30.602 clat (usec): min=8202, max=40765, avg=30167.53, stdev=1836.83 00:42:30.602 lat (usec): min=8210, max=40779, avg=30185.10, stdev=1836.75 00:42:30.602 clat percentiles (usec): 00:42:30.602 | 1.00th=[21890], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:42:30.603 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:42:30.603 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:42:30.603 | 99.00th=[31327], 99.50th=[31589], 99.90th=[32113], 99.95th=[32375], 00:42:30.603 | 99.99th=[40633] 00:42:30.603 bw ( KiB/s): min= 2043, max= 2308, per=4.19%, avg=2115.32, stdev=79.09, samples=19 00:42:30.603 iops : min= 510, max= 577, avg=528.79, stdev=19.81, samples=19 00:42:30.603 lat (msec) : 10=0.30%, 20=0.64%, 50=99.05% 00:42:30.603 cpu : usr=98.45%, sys=1.17%, ctx=24, majf=0, minf=9 00:42:30.603 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:30.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.603 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.603 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.603 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.603 filename2: (groupid=0, jobs=1): err= 0: pid=3676636: Mon Dec 16 06:11:03 2024 00:42:30.603 read: IOPS=524, BW=2099KiB/s (2149kB/s)(20.5MiB/10002msec) 00:42:30.603 slat (nsec): min=7570, max=79844, avg=22520.55, stdev=7885.47 00:42:30.603 clat (usec): min=10714, max=59983, avg=30280.37, stdev=2090.68 00:42:30.603 lat (usec): min=10735, max=59998, avg=30302.89, stdev=2090.90 00:42:30.603 clat percentiles (usec): 00:42:30.603 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:42:30.603 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:42:30.603 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:42:30.603 | 99.00th=[31327], 99.50th=[31589], 99.90th=[60031], 99.95th=[60031], 00:42:30.603 | 99.99th=[60031] 00:42:30.603 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2088.37, stdev=74.11, samples=19 00:42:30.603 iops : min= 480, max= 544, avg=522.05, stdev=18.48, samples=19 00:42:30.603 lat (msec) : 20=0.61%, 50=99.09%, 100=0.30% 00:42:30.603 cpu : usr=98.46%, sys=1.18%, ctx=15, majf=0, minf=9 00:42:30.603 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:30.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.603 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.603 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.603 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.603 filename2: (groupid=0, jobs=1): err= 0: pid=3676637: Mon Dec 16 06:11:03 2024 00:42:30.603 read: IOPS=524, BW=2099KiB/s (2149kB/s)(20.5MiB/10003msec) 00:42:30.603 slat (nsec): min=8603, max=97384, avg=40981.26, stdev=18685.72 00:42:30.603 clat (usec): min=10472, max=61092, avg=30122.46, stdev=2160.88 00:42:30.603 lat (usec): min=10535, max=61115, avg=30163.44, stdev=2159.51 00:42:30.603 clat percentiles (usec): 00:42:30.603 | 1.00th=[28967], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 
00:42:30.603 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:42:30.603 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:42:30.603 | 99.00th=[31065], 99.50th=[31327], 99.90th=[61080], 99.95th=[61080], 00:42:30.603 | 99.99th=[61080] 00:42:30.603 bw ( KiB/s): min= 1923, max= 2176, per=4.14%, avg=2088.05, stdev=74.01, samples=19 00:42:30.603 iops : min= 480, max= 544, avg=521.89, stdev=18.58, samples=19 00:42:30.603 lat (msec) : 20=0.61%, 50=99.09%, 100=0.30% 00:42:30.603 cpu : usr=98.69%, sys=0.90%, ctx=15, majf=0, minf=9 00:42:30.603 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:30.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.603 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.603 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.603 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:30.603 00:42:30.603 Run status group 0 (all jobs): 00:42:30.603 READ: bw=49.2MiB/s (51.6MB/s), 2093KiB/s-2194KiB/s (2143kB/s-2247kB/s), io=494MiB (518MB), run=10001-10026msec 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:30.603 bdev_null0 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:30.603 [2024-12-16 06:11:03.375743] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:30.603 bdev_null1 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:30.603 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:30.604 { 00:42:30.604 "params": { 00:42:30.604 "name": "Nvme$subsystem", 00:42:30.604 "trtype": "$TEST_TRANSPORT", 00:42:30.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:30.604 "adrfam": "ipv4", 00:42:30.604 "trsvcid": "$NVMF_PORT", 00:42:30.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:30.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:30.604 "hdgst": ${hdgst:-false}, 00:42:30.604 "ddgst": ${ddgst:-false} 00:42:30.604 }, 00:42:30.604 "method": "bdev_nvme_attach_controller" 00:42:30.604 } 00:42:30.604 EOF 00:42:30.604 )") 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:30.604 { 00:42:30.604 "params": { 00:42:30.604 "name": "Nvme$subsystem", 00:42:30.604 "trtype": "$TEST_TRANSPORT", 00:42:30.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:30.604 "adrfam": "ipv4", 00:42:30.604 "trsvcid": "$NVMF_PORT", 00:42:30.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:30.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:30.604 "hdgst": ${hdgst:-false}, 00:42:30.604 "ddgst": ${ddgst:-false} 00:42:30.604 }, 00:42:30.604 "method": "bdev_nvme_attach_controller" 00:42:30.604 } 00:42:30.604 EOF 00:42:30.604 )") 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:30.604 06:11:03 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:42:30.604 "params": { 00:42:30.604 "name": "Nvme0", 00:42:30.604 "trtype": "tcp", 00:42:30.604 "traddr": "10.0.0.2", 00:42:30.604 "adrfam": "ipv4", 00:42:30.604 "trsvcid": "4420", 00:42:30.604 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:30.604 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:30.604 "hdgst": false, 00:42:30.604 "ddgst": false 00:42:30.604 }, 00:42:30.604 "method": "bdev_nvme_attach_controller" 00:42:30.604 },{ 00:42:30.604 "params": { 00:42:30.604 "name": "Nvme1", 00:42:30.604 "trtype": "tcp", 00:42:30.604 "traddr": "10.0.0.2", 00:42:30.604 "adrfam": "ipv4", 00:42:30.604 "trsvcid": "4420", 00:42:30.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:30.604 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:30.604 "hdgst": false, 00:42:30.604 "ddgst": false 00:42:30.604 }, 00:42:30.604 "method": "bdev_nvme_attach_controller" 00:42:30.604 }' 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:30.604 06:11:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:30.604 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:30.604 ... 00:42:30.604 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:30.604 ... 
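The trace above assembles a bdev_nvme_attach_controller JSON config for the two target subsystems and hands it to fio through the SPDK bdev ioengine (LD_PRELOAD of build/fio/spdk_bdev, --spdk_json_conf on /dev/fd/62, job file on /dev/fd/61). A minimal standalone sketch of the same invocation follows; the traddr/trsvcid/subnqn values are copied from the printf above, while the "subsystems"/"config" wrapper and the job-file options are assumptions for illustration, since the generated job file itself is not printed in this log.

    # Sketch only: reproduce the traced fio-over-NVMe/TCP run by hand.
    cat > /tmp/nvme_attach.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Illustrative job file; the real test generates its own on /dev/fd/61.
    cat > /tmp/dif_rand.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    spdk_json_conf=/tmp/nvme_attach.json
    thread=1
    rw=randread
    iodepth=8
    runtime=5
    time_based=1
    [filename0]
    filename=Nvme0n1
    bs=8k
    EOF
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio /tmp/dif_rand.fio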
00:42:30.604 fio-3.35 00:42:30.604 Starting 4 threads 00:42:35.861 00:42:35.861 filename0: (groupid=0, jobs=1): err= 0: pid=3678528: Mon Dec 16 06:11:09 2024 00:42:35.861 read: IOPS=2525, BW=19.7MiB/s (20.7MB/s)(98.7MiB/5001msec) 00:42:35.861 slat (nsec): min=6036, max=39495, avg=9124.25, stdev=3440.98 00:42:35.861 clat (usec): min=1045, max=5624, avg=3140.70, stdev=536.10 00:42:35.861 lat (usec): min=1061, max=5631, avg=3149.82, stdev=535.66 00:42:35.861 clat percentiles (usec): 00:42:35.861 | 1.00th=[ 2147], 5.00th=[ 2507], 10.00th=[ 2671], 20.00th=[ 2802], 00:42:35.861 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3097], 00:42:35.861 | 70.00th=[ 3195], 80.00th=[ 3425], 90.00th=[ 3785], 95.00th=[ 4293], 00:42:35.861 | 99.00th=[ 5014], 99.50th=[ 5145], 99.90th=[ 5473], 99.95th=[ 5473], 00:42:35.861 | 99.99th=[ 5604] 00:42:35.861 bw ( KiB/s): min=18880, max=21104, per=23.66%, avg=20200.70, stdev=739.53, samples=10 00:42:35.861 iops : min= 2360, max= 2638, avg=2525.00, stdev=92.50, samples=10 00:42:35.861 lat (msec) : 2=0.60%, 4=91.00%, 10=8.40% 00:42:35.861 cpu : usr=96.50%, sys=3.20%, ctx=7, majf=0, minf=9 00:42:35.861 IO depths : 1=0.4%, 2=2.5%, 4=69.7%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:35.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:35.861 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:35.861 issued rwts: total=12631,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:35.861 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:35.861 filename0: (groupid=0, jobs=1): err= 0: pid=3678529: Mon Dec 16 06:11:09 2024 00:42:35.861 read: IOPS=2847, BW=22.2MiB/s (23.3MB/s)(111MiB/5002msec) 00:42:35.861 slat (nsec): min=6018, max=31908, avg=9194.41, stdev=3287.14 00:42:35.861 clat (usec): min=684, max=5353, avg=2781.47, stdev=499.30 00:42:35.861 lat (usec): min=700, max=5363, avg=2790.66, stdev=499.08 00:42:35.861 clat percentiles (usec): 00:42:35.861 | 1.00th=[ 1483], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2442], 00:42:35.861 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2900], 00:42:35.861 | 70.00th=[ 2966], 80.00th=[ 3032], 90.00th=[ 3261], 95.00th=[ 3621], 00:42:35.861 | 99.00th=[ 4424], 99.50th=[ 4621], 99.90th=[ 5014], 99.95th=[ 5145], 00:42:35.861 | 99.99th=[ 5342] 00:42:35.861 bw ( KiB/s): min=21296, max=25024, per=26.68%, avg=22780.80, stdev=1120.86, samples=10 00:42:35.861 iops : min= 2662, max= 3128, avg=2847.60, stdev=140.11, samples=10 00:42:35.861 lat (usec) : 750=0.01%, 1000=0.39% 00:42:35.861 lat (msec) : 2=2.92%, 4=94.00%, 10=2.68% 00:42:35.861 cpu : usr=95.50%, sys=4.18%, ctx=9, majf=0, minf=0 00:42:35.861 IO depths : 1=0.2%, 2=8.2%, 4=63.1%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:35.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:35.861 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:35.861 issued rwts: total=14241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:35.861 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:35.861 filename1: (groupid=0, jobs=1): err= 0: pid=3678530: Mon Dec 16 06:11:09 2024 00:42:35.861 read: IOPS=2602, BW=20.3MiB/s (21.3MB/s)(102MiB/5001msec) 00:42:35.861 slat (usec): min=6, max=157, avg= 9.49, stdev= 3.67 00:42:35.861 clat (usec): min=626, max=5421, avg=3046.10, stdev=493.74 00:42:35.861 lat (usec): min=637, max=5431, avg=3055.60, stdev=493.42 00:42:35.861 clat percentiles (usec): 00:42:35.861 | 1.00th=[ 2040], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 
2737], 00:42:35.861 | 30.00th=[ 2835], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 3032], 00:42:35.861 | 70.00th=[ 3130], 80.00th=[ 3294], 90.00th=[ 3654], 95.00th=[ 4080], 00:42:35.861 | 99.00th=[ 4686], 99.50th=[ 4883], 99.90th=[ 5145], 99.95th=[ 5211], 00:42:35.861 | 99.99th=[ 5407] 00:42:35.861 bw ( KiB/s): min=20272, max=21280, per=24.34%, avg=20782.22, stdev=368.47, samples=9 00:42:35.861 iops : min= 2534, max= 2660, avg=2597.78, stdev=46.06, samples=9 00:42:35.861 lat (usec) : 750=0.05%, 1000=0.04% 00:42:35.861 lat (msec) : 2=0.65%, 4=93.66%, 10=5.60% 00:42:35.861 cpu : usr=95.92%, sys=3.78%, ctx=7, majf=0, minf=9 00:42:35.861 IO depths : 1=0.1%, 2=5.0%, 4=65.8%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:35.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:35.861 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:35.861 issued rwts: total=13016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:35.861 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:35.861 filename1: (groupid=0, jobs=1): err= 0: pid=3678531: Mon Dec 16 06:11:09 2024 00:42:35.861 read: IOPS=2698, BW=21.1MiB/s (22.1MB/s)(105MiB/5002msec) 00:42:35.861 slat (nsec): min=6044, max=33197, avg=9581.18, stdev=3514.24 00:42:35.861 clat (usec): min=800, max=5358, avg=2935.34, stdev=508.23 00:42:35.861 lat (usec): min=812, max=5371, avg=2944.92, stdev=508.04 00:42:35.861 clat percentiles (usec): 00:42:35.861 | 1.00th=[ 1860], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2573], 00:42:35.861 | 30.00th=[ 2704], 40.00th=[ 2802], 50.00th=[ 2900], 60.00th=[ 2966], 00:42:35.861 | 70.00th=[ 3032], 80.00th=[ 3195], 90.00th=[ 3556], 95.00th=[ 3916], 00:42:35.861 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 5211], 99.95th=[ 5276], 00:42:35.861 | 99.99th=[ 5342] 00:42:35.861 bw ( KiB/s): min=19936, max=22480, per=25.29%, avg=21590.40, stdev=855.84, samples=10 00:42:35.861 iops : min= 2492, max= 2810, avg=2698.80, stdev=106.98, samples=10 00:42:35.861 lat (usec) : 1000=0.03% 00:42:35.861 lat (msec) : 2=1.68%, 4=93.72%, 10=4.57% 00:42:35.861 cpu : usr=95.80%, sys=3.86%, ctx=7, majf=0, minf=0 00:42:35.861 IO depths : 1=0.2%, 2=6.6%, 4=64.4%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:35.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:35.861 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:35.861 issued rwts: total=13499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:35.861 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:35.861 00:42:35.861 Run status group 0 (all jobs): 00:42:35.861 READ: bw=83.4MiB/s (87.4MB/s), 19.7MiB/s-22.2MiB/s (20.7MB/s-23.3MB/s), io=417MiB (437MB), run=5001-5002msec 00:42:35.861 06:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:42:35.861 06:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:35.861 06:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:35.861 06:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:35.861 06:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:35.861 06:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:35.861 06:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:35.861 06:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:35.861 06:11:09 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:35.861 06:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:35.861 06:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:35.861 06:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:35.861 06:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:35.861 06:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:35.861 06:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:35.862 06:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:35.862 06:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:35.862 06:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:35.862 06:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:35.862 06:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:35.862 06:11:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:35.862 06:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:35.862 06:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:36.120 06:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:36.120 00:42:36.120 real 0m24.175s 00:42:36.120 user 4m51.497s 00:42:36.120 sys 0m5.550s 00:42:36.120 06:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:36.120 06:11:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:36.120 ************************************ 00:42:36.120 END TEST fio_dif_rand_params 00:42:36.120 ************************************ 00:42:36.120 06:11:09 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:42:36.120 06:11:09 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:36.120 06:11:09 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:36.120 06:11:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:36.120 ************************************ 00:42:36.120 START TEST fio_dif_digest 00:42:36.120 ************************************ 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:42:36.120 06:11:09 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:36.120 bdev_null0 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:36.120 [2024-12-16 06:11:09.828400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:36.120 { 00:42:36.120 "params": { 00:42:36.120 "name": "Nvme$subsystem", 00:42:36.120 "trtype": "$TEST_TRANSPORT", 
00:42:36.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:36.120 "adrfam": "ipv4", 00:42:36.120 "trsvcid": "$NVMF_PORT", 00:42:36.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:36.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:36.120 "hdgst": ${hdgst:-false}, 00:42:36.120 "ddgst": ${ddgst:-false} 00:42:36.120 }, 00:42:36.120 "method": "bdev_nvme_attach_controller" 00:42:36.120 } 00:42:36.120 EOF 00:42:36.120 )") 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 
00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:42:36.120 "params": { 00:42:36.120 "name": "Nvme0", 00:42:36.120 "trtype": "tcp", 00:42:36.120 "traddr": "10.0.0.2", 00:42:36.120 "adrfam": "ipv4", 00:42:36.120 "trsvcid": "4420", 00:42:36.120 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:36.120 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:36.120 "hdgst": true, 00:42:36.120 "ddgst": true 00:42:36.120 }, 00:42:36.120 "method": "bdev_nvme_attach_controller" 00:42:36.120 }' 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:36.120 06:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:36.378 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:36.378 ... 
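For the digest test traced above, the target side is created with the same rpc_cmd calls but --dif-type 3, and the generated attach config enables both hdgst and ddgst. A sketch of the equivalent rpc.py sequence, using the same arguments as the calls in this trace and assuming a running nvmf_tgt with a TCP transport already created:

    # Target-side setup mirroring the rpc_cmd trace above (arguments copied
    # from this log); run from the SPDK source tree against a live nvmf_tgt.
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: the generated attach config sets "hdgst": true and
    # "ddgst": true, so bdev_nvme negotiates NVMe/TCP header and data digests.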
00:42:36.378 fio-3.35 00:42:36.378 Starting 3 threads 00:42:48.573 00:42:48.574 filename0: (groupid=0, jobs=1): err= 0: pid=3679567: Mon Dec 16 06:11:20 2024 00:42:48.574 read: IOPS=295, BW=36.9MiB/s (38.7MB/s)(371MiB/10046msec) 00:42:48.574 slat (nsec): min=6314, max=29052, avg=11893.81, stdev=2113.91 00:42:48.574 clat (usec): min=6296, max=52157, avg=10122.64, stdev=1254.08 00:42:48.574 lat (usec): min=6306, max=52170, avg=10134.53, stdev=1254.07 00:42:48.574 clat percentiles (usec): 00:42:48.574 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:42:48.574 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:42:48.574 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:42:48.574 | 99.00th=[11863], 99.50th=[11994], 99.90th=[12387], 99.95th=[47973], 00:42:48.574 | 99.99th=[52167] 00:42:48.574 bw ( KiB/s): min=37120, max=38912, per=34.84%, avg=37977.60, stdev=464.49, samples=20 00:42:48.574 iops : min= 290, max= 304, avg=296.70, stdev= 3.63, samples=20 00:42:48.574 lat (msec) : 10=43.01%, 20=56.92%, 50=0.03%, 100=0.03% 00:42:48.574 cpu : usr=93.99%, sys=5.72%, ctx=21, majf=0, minf=22 00:42:48.574 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:48.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.574 issued rwts: total=2969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.574 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:48.574 filename0: (groupid=0, jobs=1): err= 0: pid=3679568: Mon Dec 16 06:11:20 2024 00:42:48.574 read: IOPS=278, BW=34.8MiB/s (36.5MB/s)(349MiB/10045msec) 00:42:48.574 slat (nsec): min=6393, max=23025, avg=11948.90, stdev=2031.43 00:42:48.574 clat (usec): min=6986, max=48283, avg=10752.28, stdev=1204.01 00:42:48.574 lat (usec): min=7000, max=48290, avg=10764.23, stdev=1203.97 00:42:48.574 clat percentiles (usec): 00:42:48.574 | 1.00th=[ 9110], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10159], 00:42:48.574 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:42:48.574 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11994], 00:42:48.574 | 99.00th=[12649], 99.50th=[12911], 99.90th=[13829], 99.95th=[44303], 00:42:48.574 | 99.99th=[48497] 00:42:48.574 bw ( KiB/s): min=34746, max=36608, per=32.79%, avg=35746.90, stdev=500.48, samples=20 00:42:48.574 iops : min= 271, max= 286, avg=279.25, stdev= 3.96, samples=20 00:42:48.574 lat (msec) : 10=14.78%, 20=85.15%, 50=0.07% 00:42:48.574 cpu : usr=93.29%, sys=6.41%, ctx=28, majf=0, minf=38 00:42:48.574 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:48.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.574 issued rwts: total=2795,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.574 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:48.574 filename0: (groupid=0, jobs=1): err= 0: pid=3679569: Mon Dec 16 06:11:20 2024 00:42:48.574 read: IOPS=277, BW=34.7MiB/s (36.4MB/s)(349MiB/10045msec) 00:42:48.574 slat (nsec): min=6356, max=23811, avg=11863.39, stdev=1952.53 00:42:48.574 clat (usec): min=8251, max=47467, avg=10764.11, stdev=1184.77 00:42:48.574 lat (usec): min=8264, max=47480, avg=10775.97, stdev=1184.78 00:42:48.574 clat percentiles (usec): 00:42:48.574 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10159], 
00:42:48.574 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:42:48.574 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11994], 00:42:48.574 | 99.00th=[12518], 99.50th=[12649], 99.90th=[13698], 99.95th=[44827], 00:42:48.574 | 99.99th=[47449] 00:42:48.574 bw ( KiB/s): min=34816, max=36608, per=32.76%, avg=35708.35, stdev=463.27, samples=20 00:42:48.574 iops : min= 272, max= 286, avg=278.95, stdev= 3.61, samples=20 00:42:48.574 lat (msec) : 10=13.86%, 20=86.07%, 50=0.07% 00:42:48.574 cpu : usr=94.15%, sys=5.56%, ctx=15, majf=0, minf=18 00:42:48.574 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:48.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.574 issued rwts: total=2792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.574 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:48.574 00:42:48.574 Run status group 0 (all jobs): 00:42:48.574 READ: bw=106MiB/s (112MB/s), 34.7MiB/s-36.9MiB/s (36.4MB/s-38.7MB/s), io=1070MiB (1121MB), run=10045-10046msec 00:42:48.574 06:11:20 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:42:48.574 06:11:20 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:42:48.574 06:11:20 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:42:48.574 06:11:20 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:48.574 06:11:20 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:42:48.574 06:11:20 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:48.574 06:11:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:48.574 06:11:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:48.574 06:11:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:48.574 06:11:20 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:48.574 06:11:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:48.574 06:11:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:48.574 06:11:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:48.574 00:42:48.574 real 0m11.148s 00:42:48.574 user 0m34.501s 00:42:48.574 sys 0m2.052s 00:42:48.574 06:11:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:48.574 06:11:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:48.574 ************************************ 00:42:48.574 END TEST fio_dif_digest 00:42:48.574 ************************************ 00:42:48.574 06:11:20 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:42:48.574 06:11:20 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:42:48.574 06:11:20 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:48.574 06:11:20 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:42:48.574 06:11:20 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:48.574 06:11:20 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:42:48.574 06:11:20 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:48.574 06:11:20 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:48.574 rmmod nvme_tcp 00:42:48.574 rmmod nvme_fabrics 00:42:48.574 rmmod nvme_keyring 00:42:48.574 06:11:21 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:48.574 06:11:21 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:42:48.574 06:11:21 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:42:48.574 06:11:21 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 3671391 ']' 00:42:48.574 06:11:21 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 3671391 00:42:48.574 06:11:21 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 3671391 ']' 00:42:48.574 06:11:21 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 3671391 00:42:48.574 06:11:21 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:42:48.574 06:11:21 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:48.574 06:11:21 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3671391 00:42:48.574 06:11:21 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:48.574 06:11:21 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:48.574 06:11:21 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3671391' 00:42:48.574 killing process with pid 3671391 00:42:48.574 06:11:21 nvmf_dif -- common/autotest_common.sh@969 -- # kill 3671391 00:42:48.574 06:11:21 nvmf_dif -- common/autotest_common.sh@974 -- # wait 3671391 00:42:48.574 06:11:21 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:42:48.574 06:11:21 nvmf_dif -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:49.950 Waiting for block devices as requested 00:42:49.950 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:42:49.950 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:42:49.950 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:42:50.208 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:42:50.208 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:42:50.208 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:42:50.208 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:42:50.466 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:42:50.466 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:42:50.466 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:42:50.466 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:42:50.724 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:42:50.724 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:42:50.724 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:42:50.724 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:42:50.983 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:42:50.983 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:42:50.983 06:11:24 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:50.983 06:11:24 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:50.983 06:11:24 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:42:50.983 06:11:24 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:42:50.983 06:11:24 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:50.983 06:11:24 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:42:50.983 06:11:24 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:50.983 06:11:24 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:50.983 06:11:24 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:50.983 06:11:24 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:50.983 06:11:24 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:53.511 06:11:26 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:53.511 
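The nvmftestfini cleanup traced above reduces to a short sequence; a hedged sketch of the equivalent manual steps, using the values from this run (PID 3671391, namespace cvl_0_0_ns_spdk, interface cvl_0_1) and omitting the retries and guards that nvmf/common.sh wraps around them:

# unload the initiator-side kernel modules pulled in for the test
# (nvme_keyring is removed alongside them, as the rmmod lines above show)
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# stop the nvmf_tgt reactor process started at the beginning of the suite
kill 3671391
# drop only the iptables rules the test added; they are tagged with an SPDK_NVMF comment
iptables-save | grep -v SPDK_NVMF | iptables-restore
# tear down the target-side network namespace and flush the initiator interface
# (_remove_spdk_ns presumably amounts to an 'ip netns delete' of the namespace below)
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1

setup.sh reset, run in between these steps in the log, hands the NVMe device and the ioatdma channels back to their kernel drivers, which is what the vfio-pci -> nvme and vfio-pci -> ioatdma lines above record.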
00:42:53.511 real 1m12.499s 00:42:53.511 user 7m7.300s 00:42:53.511 sys 0m20.509s 00:42:53.511 06:11:26 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:53.511 06:11:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:53.511 ************************************ 00:42:53.511 END TEST nvmf_dif 00:42:53.511 ************************************ 00:42:53.511 06:11:26 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:53.511 06:11:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:53.511 06:11:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:53.511 06:11:26 -- common/autotest_common.sh@10 -- # set +x 00:42:53.511 ************************************ 00:42:53.511 START TEST nvmf_abort_qd_sizes 00:42:53.511 ************************************ 00:42:53.511 06:11:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:53.511 * Looking for test storage... 00:42:53.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:53.511 06:11:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:53.511 06:11:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:42:53.511 06:11:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:53.511 06:11:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:53.511 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:53.511 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:53.511 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:53.511 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:42:53.511 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:42:53.511 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:42:53.511 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:42:53.511 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:42:53.511 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:42:53.511 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:42:53.511 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:53.511 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:42:53.511 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:42:53.511 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:53.511 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:53.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:53.512 --rc genhtml_branch_coverage=1 00:42:53.512 --rc genhtml_function_coverage=1 00:42:53.512 --rc genhtml_legend=1 00:42:53.512 --rc geninfo_all_blocks=1 00:42:53.512 --rc geninfo_unexecuted_blocks=1 00:42:53.512 00:42:53.512 ' 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:53.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:53.512 --rc genhtml_branch_coverage=1 00:42:53.512 --rc genhtml_function_coverage=1 00:42:53.512 --rc genhtml_legend=1 00:42:53.512 --rc geninfo_all_blocks=1 00:42:53.512 --rc geninfo_unexecuted_blocks=1 00:42:53.512 00:42:53.512 ' 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:53.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:53.512 --rc genhtml_branch_coverage=1 00:42:53.512 --rc genhtml_function_coverage=1 00:42:53.512 --rc genhtml_legend=1 00:42:53.512 --rc geninfo_all_blocks=1 00:42:53.512 --rc geninfo_unexecuted_blocks=1 00:42:53.512 00:42:53.512 ' 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:53.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:53.512 --rc genhtml_branch_coverage=1 00:42:53.512 --rc genhtml_function_coverage=1 00:42:53.512 --rc genhtml_legend=1 00:42:53.512 --rc geninfo_all_blocks=1 00:42:53.512 --rc geninfo_unexecuted_blocks=1 00:42:53.512 00:42:53.512 ' 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:53.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:42:53.512 06:11:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:58.776 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:58.776 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- 
# [[ tcp == rdma ]] 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:58.776 Found net devices under 0000:af:00.0: cvl_0_0 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:58.776 Found net devices under 0000:af:00.1: cvl_0_1 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # is_hw=yes 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:58.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:58.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:42:58.776 00:42:58.776 --- 10.0.0.2 ping statistics --- 00:42:58.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:58.776 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:42:58.776 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:58.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:58.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:42:58.776 00:42:58.776 --- 10.0.0.1 ping statistics --- 00:42:58.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:58.776 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:42:58.777 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:58.777 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # return 0 00:42:58.777 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:42:58.777 06:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:02.058 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:02.058 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:02.058 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:02.058 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:02.058 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:02.058 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:02.058 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:02.058 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:02.058 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:02.058 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:02.058 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:02.058 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:02.058 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:02.058 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:02.058 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:02.058 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:02.624 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:43:02.624 06:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:02.624 06:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:43:02.624 06:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:43:02.624 06:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:02.624 06:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:43:02.624 06:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:43:02.624 06:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:43:02.624 06:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:43:02.624 06:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:02.624 06:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:02.624 06:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=3687227 00:43:02.624 06:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 3687227 00:43:02.624 06:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:43:02.624 06:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 3687227 ']' 00:43:02.624 06:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:02.624 06:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:02.624 06:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:43:02.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:02.624 06:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:02.624 06:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:02.624 [2024-12-16 06:11:36.428603] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:43:02.624 [2024-12-16 06:11:36.428652] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:02.882 [2024-12-16 06:11:36.491184] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:02.882 [2024-12-16 06:11:36.533013] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:02.882 [2024-12-16 06:11:36.533052] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:02.882 [2024-12-16 06:11:36.533060] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:02.882 [2024-12-16 06:11:36.533066] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:02.882 [2024-12-16 06:11:36.533071] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:02.882 [2024-12-16 06:11:36.533115] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:02.882 [2024-12-16 06:11:36.533217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:43:02.882 [2024-12-16 06:11:36.533306] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:43:02.882 [2024-12-16 06:11:36.533307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:43:02.882 
06:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:02.882 06:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:02.882 ************************************ 00:43:02.882 START TEST spdk_target_abort 00:43:02.882 ************************************ 00:43:02.882 06:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:43:02.882 06:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:43:02.882 06:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:43:02.882 06:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:02.882 06:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:06.159 spdk_targetn1 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:06.159 [2024-12-16 06:11:39.541476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:06.159 [2024-12-16 06:11:39.574870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:06.159 06:11:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:09.444 Initializing NVMe Controllers 00:43:09.444 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:09.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:09.444 Initialization complete. Launching workers. 00:43:09.444 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15952, failed: 0 00:43:09.444 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1385, failed to submit 14567 00:43:09.444 success 789, unsuccessful 596, failed 0 00:43:09.444 06:11:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:09.444 06:11:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:12.815 Initializing NVMe Controllers 00:43:12.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:12.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:12.815 Initialization complete. Launching workers. 00:43:12.815 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8600, failed: 0 00:43:12.815 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1258, failed to submit 7342 00:43:12.815 success 319, unsuccessful 939, failed 0 00:43:12.815 06:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:12.815 06:11:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:16.096 Initializing NVMe Controllers 00:43:16.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:16.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:16.096 Initialization complete. Launching workers. 
00:43:16.096 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38536, failed: 0 00:43:16.096 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2856, failed to submit 35680 00:43:16.096 success 584, unsuccessful 2272, failed 0 00:43:16.096 06:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:43:16.096 06:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:16.096 06:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:16.096 06:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:16.096 06:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:43:16.096 06:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:16.096 06:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:17.029 06:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:17.029 06:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3687227 00:43:17.029 06:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 3687227 ']' 00:43:17.029 06:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 3687227 00:43:17.029 06:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:43:17.029 06:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:17.029 06:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3687227 00:43:17.029 06:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:17.029 06:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:17.029 06:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3687227' 00:43:17.029 killing process with pid 3687227 00:43:17.029 06:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 3687227 00:43:17.029 06:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 3687227 00:43:17.029 00:43:17.029 real 0m14.156s 00:43:17.029 user 0m54.170s 00:43:17.029 sys 0m2.295s 00:43:17.029 06:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:17.029 06:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:17.029 ************************************ 00:43:17.029 END TEST spdk_target_abort 00:43:17.029 ************************************ 00:43:17.287 06:11:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:43:17.287 06:11:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:17.287 06:11:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:17.287 06:11:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:17.287 ************************************ 00:43:17.287 START TEST kernel_target_abort 00:43:17.287 
************************************ 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:43:17.287 06:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:19.187 Waiting for block devices as requested 00:43:19.445 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:19.445 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:19.445 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:19.703 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:19.703 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:19.703 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:19.703 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:19.960 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:19.960 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:19.960 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:20.217 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:20.217 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:20.217 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:20.217 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:20.475 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:20.475 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:20.475 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:43:20.732 No valid GPT data, bailing 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:20.732 06:11:54 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:43:20.732 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:43:20.732 00:43:20.732 Discovery Log Number of Records 2, Generation counter 2 00:43:20.732 =====Discovery Log Entry 0====== 00:43:20.732 trtype: tcp 00:43:20.732 adrfam: ipv4 00:43:20.732 subtype: current discovery subsystem 00:43:20.732 treq: not specified, sq flow control disable supported 00:43:20.732 portid: 1 00:43:20.732 trsvcid: 4420 00:43:20.732 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:43:20.732 traddr: 10.0.0.1 00:43:20.732 eflags: none 00:43:20.732 sectype: none 00:43:20.732 =====Discovery Log Entry 1====== 00:43:20.732 trtype: tcp 00:43:20.732 adrfam: ipv4 00:43:20.733 subtype: nvme subsystem 00:43:20.733 treq: not specified, sq flow control disable supported 00:43:20.733 portid: 1 00:43:20.733 trsvcid: 4420 00:43:20.733 subnqn: nqn.2016-06.io.spdk:testnqn 00:43:20.733 traddr: 10.0.0.1 00:43:20.733 eflags: none 00:43:20.733 sectype: none 00:43:20.733 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:43:20.733 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:20.733 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:20.733 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:43:20.733 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:20.733 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:20.733 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:20.733 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:20.733 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:20.733 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:20.733 06:11:54 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:20.733 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:20.733 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:20.733 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:20.733 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:43:20.733 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:20.733 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:43:20.733 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:20.733 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:20.733 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:20.733 06:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:24.009 Initializing NVMe Controllers 00:43:24.009 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:24.009 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:24.009 Initialization complete. Launching workers. 00:43:24.009 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94739, failed: 0 00:43:24.009 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 94739, failed to submit 0 00:43:24.009 success 0, unsuccessful 94739, failed 0 00:43:24.009 06:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:24.009 06:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:27.287 Initializing NVMe Controllers 00:43:27.287 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:27.287 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:27.287 Initialization complete. Launching workers. 
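For reference, the kernel NVMe-oF target that nvmf/common.sh configured above boils down to a short configfs sequence. The echoed values (the /dev/nvme0n1 backing device, 10.0.0.1, tcp, 4420, ipv4) are taken from the trace; the attribute file names they are written into are not visible in the trace and are assumed here from the standard kernel nvmet configfs layout, so treat this as a rough sketch rather than a copy of the script:

  modprobe nvmet
  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"   # block device picked above (no valid GPT, so it is free to use)
  echo 1 > "$sub/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"                   # assumed attribute names for the values echoed in the trace
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"                      # expose the subsystem on the port
  nvme discover -t tcp -a 10.0.0.1 -s 4420              # should list the two discovery log entries shown above

The abort example is then pointed at that listener once per queue depth in qds=(4 24 64), which is what the three runs in this block are.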
00:43:27.287 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 148335, failed: 0 00:43:27.287 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37202, failed to submit 111133 00:43:27.287 success 0, unsuccessful 37202, failed 0 00:43:27.287 06:12:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:27.287 06:12:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:30.562 Initializing NVMe Controllers 00:43:30.562 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:30.562 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:30.562 Initialization complete. Launching workers. 00:43:30.562 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 140075, failed: 0 00:43:30.562 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35074, failed to submit 105001 00:43:30.562 success 0, unsuccessful 35074, failed 0 00:43:30.562 06:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:43:30.562 06:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:43:30.562 06:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:43:30.562 06:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:30.562 06:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:30.562 06:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:43:30.562 06:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:30.562 06:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:43:30.563 06:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:43:30.563 06:12:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:32.462 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:32.462 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:32.462 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:32.462 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:32.462 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:32.462 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:32.462 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:32.462 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:32.462 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:32.462 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:32.462 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:32.462 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:32.462 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:32.462 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:32.462 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:43:32.462 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:33.398 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:43:33.398 00:43:33.398 real 0m16.114s 00:43:33.398 user 0m8.341s 00:43:33.398 sys 0m4.144s 00:43:33.398 06:12:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:33.398 06:12:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:33.398 ************************************ 00:43:33.398 END TEST kernel_target_abort 00:43:33.398 ************************************ 00:43:33.398 06:12:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:43:33.398 06:12:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:43:33.398 06:12:07 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:43:33.398 06:12:07 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:43:33.398 06:12:07 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:33.398 06:12:07 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:43:33.398 06:12:07 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:33.398 06:12:07 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:33.398 rmmod nvme_tcp 00:43:33.398 rmmod nvme_fabrics 00:43:33.398 rmmod nvme_keyring 00:43:33.398 06:12:07 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:33.398 06:12:07 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:43:33.398 06:12:07 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:43:33.398 06:12:07 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 3687227 ']' 00:43:33.398 06:12:07 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 3687227 00:43:33.398 06:12:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 3687227 ']' 00:43:33.398 06:12:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 3687227 00:43:33.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3687227) - No such process 00:43:33.398 06:12:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 3687227 is not found' 00:43:33.398 Process with pid 3687227 is not found 00:43:33.398 06:12:07 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:43:33.398 06:12:07 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:35.927 Waiting for block devices as requested 00:43:35.927 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:35.927 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:36.185 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:36.185 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:36.185 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:36.185 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:36.443 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:36.443 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:36.443 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:36.443 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:36.702 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:36.702 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:36.702 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:36.961 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:36.961 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:36.961 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:36.961 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:37.220 06:12:10 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:43:37.220 06:12:10 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:43:37.220 06:12:10 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:43:37.220 06:12:10 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:43:37.220 06:12:10 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:43:37.220 06:12:10 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:43:37.220 06:12:10 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:37.220 06:12:10 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:37.220 06:12:10 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:37.220 06:12:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:37.220 06:12:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:39.117 06:12:12 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:39.117 00:43:39.117 real 0m46.008s 00:43:39.117 user 1m6.510s 00:43:39.117 sys 0m14.571s 00:43:39.117 06:12:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:39.117 06:12:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:39.117 ************************************ 00:43:39.117 END TEST nvmf_abort_qd_sizes 00:43:39.117 ************************************ 00:43:39.380 06:12:12 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:39.380 06:12:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:39.380 06:12:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:39.380 06:12:12 -- common/autotest_common.sh@10 -- # set +x 00:43:39.380 ************************************ 00:43:39.380 START TEST keyring_file 00:43:39.380 ************************************ 00:43:39.380 06:12:13 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:39.380 * Looking for test storage... 
00:43:39.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:39.380 06:12:13 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:39.380 06:12:13 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:43:39.380 06:12:13 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:39.380 06:12:13 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@345 -- # : 1 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@353 -- # local d=1 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@355 -- # echo 1 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@353 -- # local d=2 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@355 -- # echo 2 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:39.380 06:12:13 keyring_file -- scripts/common.sh@368 -- # return 0 00:43:39.380 06:12:13 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:39.380 06:12:13 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:39.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:39.380 --rc genhtml_branch_coverage=1 00:43:39.380 --rc genhtml_function_coverage=1 00:43:39.380 --rc genhtml_legend=1 00:43:39.380 --rc geninfo_all_blocks=1 00:43:39.380 --rc geninfo_unexecuted_blocks=1 00:43:39.380 00:43:39.380 ' 00:43:39.380 06:12:13 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:39.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:39.380 --rc genhtml_branch_coverage=1 00:43:39.380 --rc genhtml_function_coverage=1 00:43:39.380 --rc genhtml_legend=1 00:43:39.380 --rc geninfo_all_blocks=1 
00:43:39.380 --rc geninfo_unexecuted_blocks=1 00:43:39.380 00:43:39.380 ' 00:43:39.380 06:12:13 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:39.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:39.380 --rc genhtml_branch_coverage=1 00:43:39.380 --rc genhtml_function_coverage=1 00:43:39.380 --rc genhtml_legend=1 00:43:39.380 --rc geninfo_all_blocks=1 00:43:39.380 --rc geninfo_unexecuted_blocks=1 00:43:39.380 00:43:39.380 ' 00:43:39.380 06:12:13 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:39.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:39.380 --rc genhtml_branch_coverage=1 00:43:39.380 --rc genhtml_function_coverage=1 00:43:39.380 --rc genhtml_legend=1 00:43:39.380 --rc geninfo_all_blocks=1 00:43:39.380 --rc geninfo_unexecuted_blocks=1 00:43:39.380 00:43:39.380 ' 00:43:39.381 06:12:13 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:39.381 06:12:13 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:39.381 06:12:13 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:43:39.381 06:12:13 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:39.381 06:12:13 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:39.381 06:12:13 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:39.381 06:12:13 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:39.381 06:12:13 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:39.381 06:12:13 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:39.381 06:12:13 keyring_file -- paths/export.sh@5 -- # export PATH 00:43:39.381 06:12:13 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@51 -- # : 0 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:39.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:39.381 06:12:13 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:39.381 06:12:13 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:39.381 06:12:13 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:39.381 06:12:13 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:43:39.381 06:12:13 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:43:39.381 06:12:13 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:43:39.381 06:12:13 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:39.381 06:12:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:43:39.381 06:12:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:39.381 06:12:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:39.381 06:12:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:39.381 06:12:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:39.381 06:12:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dt527diHiG 00:43:39.381 06:12:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:43:39.381 06:12:13 keyring_file -- nvmf/common.sh@729 -- # python - 00:43:39.639 06:12:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dt527diHiG 00:43:39.639 06:12:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dt527diHiG 00:43:39.639 06:12:13 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.dt527diHiG 00:43:39.639 06:12:13 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:43:39.639 06:12:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:39.639 06:12:13 keyring_file -- keyring/common.sh@17 -- # name=key1 00:43:39.639 06:12:13 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:39.639 06:12:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:39.639 06:12:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:39.639 06:12:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.F5y9udOhuk 00:43:39.639 06:12:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:39.639 06:12:13 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:39.639 06:12:13 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:43:39.639 06:12:13 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:43:39.639 06:12:13 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:43:39.639 06:12:13 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:43:39.639 06:12:13 keyring_file -- nvmf/common.sh@729 -- # python - 00:43:39.639 06:12:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.F5y9udOhuk 00:43:39.639 06:12:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.F5y9udOhuk 00:43:39.639 06:12:13 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.F5y9udOhuk 00:43:39.639 06:12:13 keyring_file -- keyring/file.sh@30 -- # tgtpid=3696086 00:43:39.639 06:12:13 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3696086 00:43:39.639 06:12:13 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:39.639 06:12:13 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3696086 ']' 00:43:39.639 06:12:13 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:39.639 06:12:13 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:39.639 06:12:13 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:39.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:39.640 06:12:13 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:39.640 06:12:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:39.640 [2024-12-16 06:12:13.363425] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:43:39.640 [2024-12-16 06:12:13.363474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3696086 ] 00:43:39.640 [2024-12-16 06:12:13.418095] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:39.640 [2024-12-16 06:12:13.457737] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:43:39.898 06:12:13 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:39.898 [2024-12-16 06:12:13.661313] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:39.898 null0 00:43:39.898 [2024-12-16 06:12:13.693373] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:39.898 [2024-12-16 06:12:13.693659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:39.898 06:12:13 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:39.898 [2024-12-16 06:12:13.721440] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:43:39.898 request: 00:43:39.898 { 00:43:39.898 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:43:39.898 "secure_channel": false, 00:43:39.898 "listen_address": { 00:43:39.898 "trtype": "tcp", 00:43:39.898 "traddr": "127.0.0.1", 00:43:39.898 "trsvcid": "4420" 00:43:39.898 }, 00:43:39.898 "method": "nvmf_subsystem_add_listener", 00:43:39.898 "req_id": 1 00:43:39.898 } 00:43:39.898 Got JSON-RPC error response 00:43:39.898 response: 00:43:39.898 { 00:43:39.898 
"code": -32602, 00:43:39.898 "message": "Invalid parameters" 00:43:39.898 } 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:39.898 06:12:13 keyring_file -- keyring/file.sh@47 -- # bperfpid=3696099 00:43:39.898 06:12:13 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3696099 /var/tmp/bperf.sock 00:43:39.898 06:12:13 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3696099 ']' 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:39.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:39.898 06:12:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:40.156 [2024-12-16 06:12:13.772265] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:43:40.156 [2024-12-16 06:12:13.772306] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3696099 ] 00:43:40.156 [2024-12-16 06:12:13.826456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:40.156 [2024-12-16 06:12:13.864351] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:40.156 06:12:13 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:40.156 06:12:13 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:43:40.156 06:12:13 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dt527diHiG 00:43:40.156 06:12:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dt527diHiG 00:43:40.414 06:12:14 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.F5y9udOhuk 00:43:40.414 06:12:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.F5y9udOhuk 00:43:40.676 06:12:14 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:43:40.676 06:12:14 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:43:40.676 06:12:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:40.676 06:12:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:40.676 06:12:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:43:40.937 06:12:14 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.dt527diHiG == \/\t\m\p\/\t\m\p\.\d\t\5\2\7\d\i\H\i\G ]] 00:43:40.937 06:12:14 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:43:40.937 06:12:14 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:43:40.937 06:12:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:40.937 06:12:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:40.937 06:12:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:40.937 06:12:14 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.F5y9udOhuk == \/\t\m\p\/\t\m\p\.\F\5\y\9\u\d\O\h\u\k ]] 00:43:40.937 06:12:14 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:43:40.938 06:12:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:40.938 06:12:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:40.938 06:12:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:40.938 06:12:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:40.938 06:12:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:41.195 06:12:14 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:43:41.195 06:12:14 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:43:41.195 06:12:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:41.195 06:12:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:41.195 06:12:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:41.195 06:12:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:41.195 06:12:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:41.467 06:12:15 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:43:41.467 06:12:15 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:41.467 06:12:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:41.468 [2024-12-16 06:12:15.276724] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:41.730 nvme0n1 00:43:41.730 06:12:15 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:43:41.730 06:12:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:41.730 06:12:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:41.730 06:12:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:41.730 06:12:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:41.730 06:12:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:41.730 06:12:15 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:43:41.730 06:12:15 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:43:41.730 06:12:15 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:43:41.730 06:12:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:41.730 06:12:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:41.730 06:12:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:41.730 06:12:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:41.988 06:12:15 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:43:41.988 06:12:15 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:42.246 Running I/O for 1 seconds... 00:43:43.179 18700.00 IOPS, 73.05 MiB/s 00:43:43.179 Latency(us) 00:43:43.179 [2024-12-16T05:12:17.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:43.179 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:43:43.179 nvme0n1 : 1.00 18748.81 73.24 0.00 0.00 6814.64 4181.82 12233.39 00:43:43.179 [2024-12-16T05:12:17.035Z] =================================================================================================================== 00:43:43.179 [2024-12-16T05:12:17.035Z] Total : 18748.81 73.24 0.00 0.00 6814.64 4181.82 12233.39 00:43:43.179 { 00:43:43.179 "results": [ 00:43:43.179 { 00:43:43.179 "job": "nvme0n1", 00:43:43.179 "core_mask": "0x2", 00:43:43.179 "workload": "randrw", 00:43:43.179 "percentage": 50, 00:43:43.179 "status": "finished", 00:43:43.179 "queue_depth": 128, 00:43:43.179 "io_size": 4096, 00:43:43.179 "runtime": 1.004224, 00:43:43.179 "iops": 18748.805047479447, 00:43:43.179 "mibps": 73.23751971671659, 00:43:43.179 "io_failed": 0, 00:43:43.179 "io_timeout": 0, 00:43:43.179 "avg_latency_us": 6814.63976610317, 00:43:43.179 "min_latency_us": 4181.820952380953, 00:43:43.179 "max_latency_us": 12233.386666666667 00:43:43.179 } 00:43:43.179 ], 00:43:43.179 "core_count": 1 00:43:43.179 } 00:43:43.179 06:12:16 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:43.179 06:12:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:43.437 06:12:17 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:43:43.437 06:12:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:43.437 06:12:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:43.437 06:12:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:43.437 06:12:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:43.437 06:12:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:43.437 06:12:17 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:43:43.437 06:12:17 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:43:43.437 06:12:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:43.437 06:12:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:43.437 06:12:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:43.437 06:12:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:43.437 06:12:17 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:43.695 06:12:17 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:43:43.695 06:12:17 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:43.695 06:12:17 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:43:43.695 06:12:17 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:43.695 06:12:17 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:43:43.695 06:12:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:43.695 06:12:17 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:43:43.695 06:12:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:43.695 06:12:17 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:43.695 06:12:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:43.953 [2024-12-16 06:12:17.634943] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:43.953 [2024-12-16 06:12:17.635291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1844d50 (107): Transport endpoint is not connected 00:43:43.953 [2024-12-16 06:12:17.636285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1844d50 (9): Bad file descriptor 00:43:43.953 [2024-12-16 06:12:17.637287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:43.953 [2024-12-16 06:12:17.637298] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:43.953 [2024-12-16 06:12:17.637305] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:43.953 [2024-12-16 06:12:17.637314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
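The happy-path data check that precedes this failure is driven entirely through the bperf.sock RPC socket, while the attach with --psk key1 is wrapped in NOT, meaning the harness expects it to fail. For reference, the successful sequence condensed from the trace (same commands, nothing added beyond the shell variable):

  rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0      # creates bdev nvme0n1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests                                       # the ~18.7k IOPS randrw pass above
  $rpc bdev_nvme_detach_controller nvme0

The request and error response for the failing key1 attach follow below.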
00:43:43.953 request: 00:43:43.953 { 00:43:43.953 "name": "nvme0", 00:43:43.953 "trtype": "tcp", 00:43:43.953 "traddr": "127.0.0.1", 00:43:43.953 "adrfam": "ipv4", 00:43:43.953 "trsvcid": "4420", 00:43:43.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:43.953 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:43.953 "prchk_reftag": false, 00:43:43.953 "prchk_guard": false, 00:43:43.953 "hdgst": false, 00:43:43.953 "ddgst": false, 00:43:43.953 "psk": "key1", 00:43:43.953 "allow_unrecognized_csi": false, 00:43:43.953 "method": "bdev_nvme_attach_controller", 00:43:43.953 "req_id": 1 00:43:43.953 } 00:43:43.953 Got JSON-RPC error response 00:43:43.953 response: 00:43:43.953 { 00:43:43.953 "code": -5, 00:43:43.953 "message": "Input/output error" 00:43:43.953 } 00:43:43.953 06:12:17 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:43:43.953 06:12:17 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:43.953 06:12:17 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:43.953 06:12:17 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:43.953 06:12:17 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:43:43.953 06:12:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:43.953 06:12:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:43.953 06:12:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:43.953 06:12:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:43.953 06:12:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:44.212 06:12:17 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:43:44.212 06:12:17 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:43:44.212 06:12:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:44.212 06:12:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:44.212 06:12:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:44.212 06:12:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:44.212 06:12:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:44.212 06:12:18 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:43:44.212 06:12:18 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:43:44.212 06:12:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:44.469 06:12:18 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:43:44.469 06:12:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:43:44.726 06:12:18 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:43:44.726 06:12:18 keyring_file -- keyring/file.sh@78 -- # jq length 00:43:44.726 06:12:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:44.984 06:12:18 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:43:44.984 06:12:18 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.dt527diHiG 00:43:44.984 06:12:18 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.dt527diHiG 00:43:44.984 06:12:18 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:43:44.984 06:12:18 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.dt527diHiG 00:43:44.984 06:12:18 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:43:44.984 06:12:18 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:44.984 06:12:18 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:43:44.984 06:12:18 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:44.984 06:12:18 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dt527diHiG 00:43:44.984 06:12:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dt527diHiG 00:43:44.984 [2024-12-16 06:12:18.779403] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.dt527diHiG': 0100660 00:43:44.984 [2024-12-16 06:12:18.779435] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:43:44.984 request: 00:43:44.984 { 00:43:44.984 "name": "key0", 00:43:44.984 "path": "/tmp/tmp.dt527diHiG", 00:43:44.984 "method": "keyring_file_add_key", 00:43:44.984 "req_id": 1 00:43:44.984 } 00:43:44.984 Got JSON-RPC error response 00:43:44.984 response: 00:43:44.984 { 00:43:44.984 "code": -1, 00:43:44.984 "message": "Operation not permitted" 00:43:44.984 } 00:43:44.984 06:12:18 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:43:44.984 06:12:18 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:44.984 06:12:18 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:44.984 06:12:18 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:44.984 06:12:18 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.dt527diHiG 00:43:44.984 06:12:18 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dt527diHiG 00:43:44.984 06:12:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dt527diHiG 00:43:45.242 06:12:18 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.dt527diHiG 00:43:45.242 06:12:18 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:43:45.242 06:12:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:45.242 06:12:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:45.242 06:12:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:45.242 06:12:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:45.242 06:12:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:45.499 06:12:19 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:43:45.499 06:12:19 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:45.499 06:12:19 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:43:45.499 06:12:19 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:45.499 06:12:19 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:43:45.499 06:12:19 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:45.499 06:12:19 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:43:45.499 06:12:19 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:45.499 06:12:19 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:45.499 06:12:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:45.499 [2024-12-16 06:12:19.352926] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.dt527diHiG': No such file or directory 00:43:45.499 [2024-12-16 06:12:19.352950] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:43:45.499 [2024-12-16 06:12:19.352966] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:43:45.499 [2024-12-16 06:12:19.352974] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:43:45.499 [2024-12-16 06:12:19.352980] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:43:45.499 [2024-12-16 06:12:19.352986] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:43:45.757 request: 00:43:45.757 { 00:43:45.757 "name": "nvme0", 00:43:45.757 "trtype": "tcp", 00:43:45.757 "traddr": "127.0.0.1", 00:43:45.757 "adrfam": "ipv4", 00:43:45.757 "trsvcid": "4420", 00:43:45.757 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:45.757 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:45.757 "prchk_reftag": false, 00:43:45.757 "prchk_guard": false, 00:43:45.757 "hdgst": false, 00:43:45.757 "ddgst": false, 00:43:45.757 "psk": "key0", 00:43:45.757 "allow_unrecognized_csi": false, 00:43:45.757 "method": "bdev_nvme_attach_controller", 00:43:45.757 "req_id": 1 00:43:45.757 } 00:43:45.757 Got JSON-RPC error response 00:43:45.757 response: 00:43:45.757 { 00:43:45.757 "code": -19, 00:43:45.757 "message": "No such device" 00:43:45.757 } 00:43:45.757 06:12:19 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:43:45.757 06:12:19 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:45.757 06:12:19 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:45.757 06:12:19 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:45.757 06:12:19 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:43:45.757 06:12:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:45.757 06:12:19 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:45.757 06:12:19 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:43:45.757 06:12:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:45.757 06:12:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:45.757 06:12:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:45.757 06:12:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:45.757 06:12:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wh5iuUONiF 00:43:45.757 06:12:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:45.757 06:12:19 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:45.757 06:12:19 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:43:45.757 06:12:19 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:43:45.757 06:12:19 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:43:45.757 06:12:19 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:43:45.757 06:12:19 keyring_file -- nvmf/common.sh@729 -- # python - 00:43:45.757 06:12:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wh5iuUONiF 00:43:45.757 06:12:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wh5iuUONiF 00:43:45.757 06:12:19 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.wh5iuUONiF 00:43:45.757 06:12:19 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wh5iuUONiF 00:43:45.757 06:12:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wh5iuUONiF 00:43:46.015 06:12:19 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:46.015 06:12:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:46.273 nvme0n1 00:43:46.273 06:12:20 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:43:46.273 06:12:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:46.273 06:12:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:46.273 06:12:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:46.273 06:12:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:46.273 06:12:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:46.530 06:12:20 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:43:46.530 06:12:20 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:43:46.530 06:12:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:46.788 06:12:20 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:43:46.788 06:12:20 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:43:46.788 06:12:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:46.788 06:12:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:46.788 06:12:20 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:46.788 06:12:20 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:43:46.788 06:12:20 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:43:46.788 06:12:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:46.788 06:12:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:46.788 06:12:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:46.788 06:12:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:46.788 06:12:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:47.045 06:12:20 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:43:47.045 06:12:20 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:47.045 06:12:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:47.302 06:12:21 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:43:47.302 06:12:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:47.302 06:12:21 keyring_file -- keyring/file.sh@105 -- # jq length 00:43:47.559 06:12:21 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:43:47.559 06:12:21 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wh5iuUONiF 00:43:47.559 06:12:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wh5iuUONiF 00:43:47.559 06:12:21 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.F5y9udOhuk 00:43:47.559 06:12:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.F5y9udOhuk 00:43:47.817 06:12:21 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:47.817 06:12:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:48.075 nvme0n1 00:43:48.075 06:12:21 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:43:48.075 06:12:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:43:48.333 06:12:22 keyring_file -- keyring/file.sh@113 -- # config='{ 00:43:48.333 "subsystems": [ 00:43:48.333 { 00:43:48.333 "subsystem": "keyring", 00:43:48.333 "config": [ 00:43:48.333 { 00:43:48.333 "method": "keyring_file_add_key", 00:43:48.333 "params": { 00:43:48.333 "name": "key0", 00:43:48.333 "path": "/tmp/tmp.wh5iuUONiF" 00:43:48.333 } 00:43:48.333 }, 00:43:48.333 { 00:43:48.333 "method": "keyring_file_add_key", 00:43:48.333 "params": { 00:43:48.333 "name": "key1", 00:43:48.333 "path": "/tmp/tmp.F5y9udOhuk" 00:43:48.333 } 00:43:48.333 } 00:43:48.333 ] 00:43:48.333 
}, 00:43:48.333 { 00:43:48.333 "subsystem": "iobuf", 00:43:48.333 "config": [ 00:43:48.333 { 00:43:48.333 "method": "iobuf_set_options", 00:43:48.333 "params": { 00:43:48.333 "small_pool_count": 8192, 00:43:48.333 "large_pool_count": 1024, 00:43:48.333 "small_bufsize": 8192, 00:43:48.333 "large_bufsize": 135168 00:43:48.333 } 00:43:48.333 } 00:43:48.333 ] 00:43:48.333 }, 00:43:48.333 { 00:43:48.333 "subsystem": "sock", 00:43:48.333 "config": [ 00:43:48.333 { 00:43:48.333 "method": "sock_set_default_impl", 00:43:48.333 "params": { 00:43:48.333 "impl_name": "posix" 00:43:48.333 } 00:43:48.333 }, 00:43:48.333 { 00:43:48.333 "method": "sock_impl_set_options", 00:43:48.333 "params": { 00:43:48.333 "impl_name": "ssl", 00:43:48.333 "recv_buf_size": 4096, 00:43:48.333 "send_buf_size": 4096, 00:43:48.333 "enable_recv_pipe": true, 00:43:48.333 "enable_quickack": false, 00:43:48.333 "enable_placement_id": 0, 00:43:48.333 "enable_zerocopy_send_server": true, 00:43:48.333 "enable_zerocopy_send_client": false, 00:43:48.333 "zerocopy_threshold": 0, 00:43:48.333 "tls_version": 0, 00:43:48.333 "enable_ktls": false 00:43:48.333 } 00:43:48.333 }, 00:43:48.333 { 00:43:48.333 "method": "sock_impl_set_options", 00:43:48.333 "params": { 00:43:48.333 "impl_name": "posix", 00:43:48.334 "recv_buf_size": 2097152, 00:43:48.334 "send_buf_size": 2097152, 00:43:48.334 "enable_recv_pipe": true, 00:43:48.334 "enable_quickack": false, 00:43:48.334 "enable_placement_id": 0, 00:43:48.334 "enable_zerocopy_send_server": true, 00:43:48.334 "enable_zerocopy_send_client": false, 00:43:48.334 "zerocopy_threshold": 0, 00:43:48.334 "tls_version": 0, 00:43:48.334 "enable_ktls": false 00:43:48.334 } 00:43:48.334 } 00:43:48.334 ] 00:43:48.334 }, 00:43:48.334 { 00:43:48.334 "subsystem": "vmd", 00:43:48.334 "config": [] 00:43:48.334 }, 00:43:48.334 { 00:43:48.334 "subsystem": "accel", 00:43:48.334 "config": [ 00:43:48.334 { 00:43:48.334 "method": "accel_set_options", 00:43:48.334 "params": { 00:43:48.334 "small_cache_size": 128, 00:43:48.334 "large_cache_size": 16, 00:43:48.334 "task_count": 2048, 00:43:48.334 "sequence_count": 2048, 00:43:48.334 "buf_count": 2048 00:43:48.334 } 00:43:48.334 } 00:43:48.334 ] 00:43:48.334 }, 00:43:48.334 { 00:43:48.334 "subsystem": "bdev", 00:43:48.334 "config": [ 00:43:48.334 { 00:43:48.334 "method": "bdev_set_options", 00:43:48.334 "params": { 00:43:48.334 "bdev_io_pool_size": 65535, 00:43:48.334 "bdev_io_cache_size": 256, 00:43:48.334 "bdev_auto_examine": true, 00:43:48.334 "iobuf_small_cache_size": 128, 00:43:48.334 "iobuf_large_cache_size": 16 00:43:48.334 } 00:43:48.334 }, 00:43:48.334 { 00:43:48.334 "method": "bdev_raid_set_options", 00:43:48.334 "params": { 00:43:48.334 "process_window_size_kb": 1024, 00:43:48.334 "process_max_bandwidth_mb_sec": 0 00:43:48.334 } 00:43:48.334 }, 00:43:48.334 { 00:43:48.334 "method": "bdev_iscsi_set_options", 00:43:48.334 "params": { 00:43:48.334 "timeout_sec": 30 00:43:48.334 } 00:43:48.334 }, 00:43:48.334 { 00:43:48.334 "method": "bdev_nvme_set_options", 00:43:48.334 "params": { 00:43:48.334 "action_on_timeout": "none", 00:43:48.334 "timeout_us": 0, 00:43:48.334 "timeout_admin_us": 0, 00:43:48.334 "keep_alive_timeout_ms": 10000, 00:43:48.334 "arbitration_burst": 0, 00:43:48.334 "low_priority_weight": 0, 00:43:48.334 "medium_priority_weight": 0, 00:43:48.334 "high_priority_weight": 0, 00:43:48.334 "nvme_adminq_poll_period_us": 10000, 00:43:48.334 "nvme_ioq_poll_period_us": 0, 00:43:48.334 "io_queue_requests": 512, 00:43:48.334 "delay_cmd_submit": true, 00:43:48.334 
"transport_retry_count": 4, 00:43:48.334 "bdev_retry_count": 3, 00:43:48.334 "transport_ack_timeout": 0, 00:43:48.334 "ctrlr_loss_timeout_sec": 0, 00:43:48.334 "reconnect_delay_sec": 0, 00:43:48.334 "fast_io_fail_timeout_sec": 0, 00:43:48.334 "disable_auto_failback": false, 00:43:48.334 "generate_uuids": false, 00:43:48.334 "transport_tos": 0, 00:43:48.334 "nvme_error_stat": false, 00:43:48.334 "rdma_srq_size": 0, 00:43:48.334 "io_path_stat": false, 00:43:48.334 "allow_accel_sequence": false, 00:43:48.334 "rdma_max_cq_size": 0, 00:43:48.334 "rdma_cm_event_timeout_ms": 0, 00:43:48.334 "dhchap_digests": [ 00:43:48.334 "sha256", 00:43:48.334 "sha384", 00:43:48.334 "sha512" 00:43:48.334 ], 00:43:48.334 "dhchap_dhgroups": [ 00:43:48.334 "null", 00:43:48.334 "ffdhe2048", 00:43:48.334 "ffdhe3072", 00:43:48.334 "ffdhe4096", 00:43:48.334 "ffdhe6144", 00:43:48.334 "ffdhe8192" 00:43:48.334 ] 00:43:48.334 } 00:43:48.334 }, 00:43:48.334 { 00:43:48.334 "method": "bdev_nvme_attach_controller", 00:43:48.334 "params": { 00:43:48.334 "name": "nvme0", 00:43:48.334 "trtype": "TCP", 00:43:48.334 "adrfam": "IPv4", 00:43:48.334 "traddr": "127.0.0.1", 00:43:48.334 "trsvcid": "4420", 00:43:48.334 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:48.334 "prchk_reftag": false, 00:43:48.334 "prchk_guard": false, 00:43:48.334 "ctrlr_loss_timeout_sec": 0, 00:43:48.334 "reconnect_delay_sec": 0, 00:43:48.334 "fast_io_fail_timeout_sec": 0, 00:43:48.334 "psk": "key0", 00:43:48.334 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:48.334 "hdgst": false, 00:43:48.334 "ddgst": false 00:43:48.334 } 00:43:48.334 }, 00:43:48.334 { 00:43:48.334 "method": "bdev_nvme_set_hotplug", 00:43:48.334 "params": { 00:43:48.334 "period_us": 100000, 00:43:48.334 "enable": false 00:43:48.334 } 00:43:48.334 }, 00:43:48.334 { 00:43:48.334 "method": "bdev_wait_for_examine" 00:43:48.334 } 00:43:48.334 ] 00:43:48.334 }, 00:43:48.334 { 00:43:48.334 "subsystem": "nbd", 00:43:48.334 "config": [] 00:43:48.334 } 00:43:48.334 ] 00:43:48.334 }' 00:43:48.334 06:12:22 keyring_file -- keyring/file.sh@115 -- # killprocess 3696099 00:43:48.334 06:12:22 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3696099 ']' 00:43:48.334 06:12:22 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3696099 00:43:48.334 06:12:22 keyring_file -- common/autotest_common.sh@955 -- # uname 00:43:48.334 06:12:22 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:48.334 06:12:22 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3696099 00:43:48.334 06:12:22 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:48.334 06:12:22 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:48.334 06:12:22 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3696099' 00:43:48.334 killing process with pid 3696099 00:43:48.334 06:12:22 keyring_file -- common/autotest_common.sh@969 -- # kill 3696099 00:43:48.334 Received shutdown signal, test time was about 1.000000 seconds 00:43:48.334 00:43:48.334 Latency(us) 00:43:48.334 [2024-12-16T05:12:22.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:48.334 [2024-12-16T05:12:22.190Z] =================================================================================================================== 00:43:48.334 [2024-12-16T05:12:22.190Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:48.334 06:12:22 keyring_file -- common/autotest_common.sh@974 -- # wait 3696099 00:43:48.593 
06:12:22 keyring_file -- keyring/file.sh@118 -- # bperfpid=3697575 00:43:48.593 06:12:22 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3697575 /var/tmp/bperf.sock 00:43:48.593 06:12:22 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3697575 ']' 00:43:48.593 06:12:22 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:48.593 06:12:22 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:43:48.593 06:12:22 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:48.593 06:12:22 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:48.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:48.593 06:12:22 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:43:48.593 "subsystems": [ 00:43:48.593 { 00:43:48.593 "subsystem": "keyring", 00:43:48.593 "config": [ 00:43:48.593 { 00:43:48.593 "method": "keyring_file_add_key", 00:43:48.593 "params": { 00:43:48.593 "name": "key0", 00:43:48.593 "path": "/tmp/tmp.wh5iuUONiF" 00:43:48.593 } 00:43:48.593 }, 00:43:48.593 { 00:43:48.593 "method": "keyring_file_add_key", 00:43:48.593 "params": { 00:43:48.593 "name": "key1", 00:43:48.593 "path": "/tmp/tmp.F5y9udOhuk" 00:43:48.593 } 00:43:48.593 } 00:43:48.593 ] 00:43:48.593 }, 00:43:48.593 { 00:43:48.593 "subsystem": "iobuf", 00:43:48.593 "config": [ 00:43:48.593 { 00:43:48.593 "method": "iobuf_set_options", 00:43:48.593 "params": { 00:43:48.593 "small_pool_count": 8192, 00:43:48.593 "large_pool_count": 1024, 00:43:48.593 "small_bufsize": 8192, 00:43:48.593 "large_bufsize": 135168 00:43:48.593 } 00:43:48.593 } 00:43:48.593 ] 00:43:48.593 }, 00:43:48.593 { 00:43:48.593 "subsystem": "sock", 00:43:48.593 "config": [ 00:43:48.593 { 00:43:48.593 "method": "sock_set_default_impl", 00:43:48.593 "params": { 00:43:48.593 "impl_name": "posix" 00:43:48.593 } 00:43:48.593 }, 00:43:48.593 { 00:43:48.593 "method": "sock_impl_set_options", 00:43:48.593 "params": { 00:43:48.593 "impl_name": "ssl", 00:43:48.593 "recv_buf_size": 4096, 00:43:48.593 "send_buf_size": 4096, 00:43:48.593 "enable_recv_pipe": true, 00:43:48.593 "enable_quickack": false, 00:43:48.593 "enable_placement_id": 0, 00:43:48.593 "enable_zerocopy_send_server": true, 00:43:48.593 "enable_zerocopy_send_client": false, 00:43:48.593 "zerocopy_threshold": 0, 00:43:48.593 "tls_version": 0, 00:43:48.593 "enable_ktls": false 00:43:48.593 } 00:43:48.593 }, 00:43:48.593 { 00:43:48.593 "method": "sock_impl_set_options", 00:43:48.593 "params": { 00:43:48.593 "impl_name": "posix", 00:43:48.593 "recv_buf_size": 2097152, 00:43:48.593 "send_buf_size": 2097152, 00:43:48.593 "enable_recv_pipe": true, 00:43:48.593 "enable_quickack": false, 00:43:48.593 "enable_placement_id": 0, 00:43:48.593 "enable_zerocopy_send_server": true, 00:43:48.593 "enable_zerocopy_send_client": false, 00:43:48.593 "zerocopy_threshold": 0, 00:43:48.593 "tls_version": 0, 00:43:48.593 "enable_ktls": false 00:43:48.593 } 00:43:48.593 } 00:43:48.593 ] 00:43:48.593 }, 00:43:48.593 { 00:43:48.593 "subsystem": "vmd", 00:43:48.593 "config": [] 00:43:48.593 }, 00:43:48.593 { 00:43:48.593 "subsystem": "accel", 00:43:48.593 "config": [ 00:43:48.593 { 00:43:48.593 "method": "accel_set_options", 00:43:48.593 "params": { 00:43:48.593 "small_cache_size": 128, 
00:43:48.593 "large_cache_size": 16, 00:43:48.593 "task_count": 2048, 00:43:48.593 "sequence_count": 2048, 00:43:48.593 "buf_count": 2048 00:43:48.593 } 00:43:48.593 } 00:43:48.593 ] 00:43:48.593 }, 00:43:48.593 { 00:43:48.593 "subsystem": "bdev", 00:43:48.593 "config": [ 00:43:48.593 { 00:43:48.593 "method": "bdev_set_options", 00:43:48.593 "params": { 00:43:48.593 "bdev_io_pool_size": 65535, 00:43:48.593 "bdev_io_cache_size": 256, 00:43:48.593 "bdev_auto_examine": true, 00:43:48.593 "iobuf_small_cache_size": 128, 00:43:48.593 "iobuf_large_cache_size": 16 00:43:48.593 } 00:43:48.593 }, 00:43:48.593 { 00:43:48.593 "method": "bdev_raid_set_options", 00:43:48.593 "params": { 00:43:48.593 "process_window_size_kb": 1024, 00:43:48.593 "process_max_bandwidth_mb_sec": 0 00:43:48.593 } 00:43:48.593 }, 00:43:48.593 { 00:43:48.593 "method": "bdev_iscsi_set_options", 00:43:48.593 "params": { 00:43:48.593 "timeout_sec": 30 00:43:48.593 } 00:43:48.593 }, 00:43:48.593 { 00:43:48.593 "method": "bdev_nvme_set_options", 00:43:48.593 "params": { 00:43:48.593 "action_on_timeout": "none", 00:43:48.593 "timeout_us": 0, 00:43:48.593 "timeout_admin_us": 0, 00:43:48.593 "keep_alive_timeout_ms": 10000, 00:43:48.593 "arbitration_burst": 0, 00:43:48.593 "low_priority_weight": 0, 00:43:48.593 "medium_priority_weight": 0, 00:43:48.593 "high_priority_weight": 0, 00:43:48.593 "nvme_adminq_poll_period_us": 10000, 00:43:48.593 "nvme_ioq_poll_period_us": 0, 00:43:48.593 "io_queue_requests": 512, 00:43:48.593 "delay_cmd_submit": true, 00:43:48.593 "transport_retry_count": 4, 00:43:48.593 "bdev_retry_count": 3, 00:43:48.593 "transport_ack_timeout": 0, 00:43:48.593 "ctrlr_loss_timeout_sec": 0, 00:43:48.593 "reconnect_delay_sec": 0, 00:43:48.593 "fast_io_fail_timeout_sec": 0, 00:43:48.593 "disable_auto_failback": false, 00:43:48.593 "generate_uuids": false, 00:43:48.593 "transport_tos": 0, 00:43:48.593 "nvme_error_stat": false, 00:43:48.593 "rdma_srq_size": 0, 00:43:48.593 "io_path_stat": false, 00:43:48.593 "allow_accel_sequence": false, 00:43:48.593 "rdma_max_cq_size": 0, 00:43:48.593 "rdma_cm_event_timeout_ms": 0, 00:43:48.593 "dhchap_digests": [ 00:43:48.593 "sha256", 00:43:48.593 "sha384", 00:43:48.593 "sha512" 00:43:48.593 ], 00:43:48.593 "dhchap_dhgroups": [ 00:43:48.593 "null", 00:43:48.593 "ffdhe2048", 00:43:48.593 "ffdhe3072", 00:43:48.593 "ffdhe4096", 00:43:48.593 "ffdhe6144", 00:43:48.593 "ffdhe8192" 00:43:48.593 ] 00:43:48.593 } 00:43:48.594 }, 00:43:48.594 { 00:43:48.594 "method": "bdev_nvme_attach_controller", 00:43:48.594 "params": { 00:43:48.594 "name": "nvme0", 00:43:48.594 "trtype": "TCP", 00:43:48.594 "adrfam": "IPv4", 00:43:48.594 "traddr": "127.0.0.1", 00:43:48.594 "trsvcid": "4420", 00:43:48.594 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:48.594 "prchk_reftag": false, 00:43:48.594 "prchk_guard": false, 00:43:48.594 "ctrlr_loss_timeout_sec": 0, 00:43:48.594 "reconnect_delay_sec": 0, 00:43:48.594 "fast_io_fail_timeout_sec": 0, 00:43:48.594 "psk": "key0", 00:43:48.594 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:48.594 "hdgst": false, 00:43:48.594 "ddgst": false 00:43:48.594 } 00:43:48.594 }, 00:43:48.594 { 00:43:48.594 "method": "bdev_nvme_set_hotplug", 00:43:48.594 "params": { 00:43:48.594 "period_us": 100000, 00:43:48.594 "enable": false 00:43:48.594 } 00:43:48.594 }, 00:43:48.594 { 00:43:48.594 "method": "bdev_wait_for_examine" 00:43:48.594 } 00:43:48.594 ] 00:43:48.594 }, 00:43:48.594 { 00:43:48.594 "subsystem": "nbd", 00:43:48.594 "config": [] 00:43:48.594 } 00:43:48.594 ] 00:43:48.594 }' 
00:43:48.594 06:12:22 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:48.594 06:12:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:48.594 [2024-12-16 06:12:22.381690] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:43:48.594 [2024-12-16 06:12:22.381734] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3697575 ] 00:43:48.594 [2024-12-16 06:12:22.436866] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:48.852 [2024-12-16 06:12:22.477251] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:48.852 [2024-12-16 06:12:22.630861] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:49.418 06:12:23 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:49.418 06:12:23 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:43:49.418 06:12:23 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:43:49.418 06:12:23 keyring_file -- keyring/file.sh@121 -- # jq length 00:43:49.418 06:12:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:49.676 06:12:23 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:43:49.676 06:12:23 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:43:49.676 06:12:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:49.676 06:12:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:49.676 06:12:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:49.676 06:12:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:49.676 06:12:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:49.933 06:12:23 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:43:49.933 06:12:23 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:43:49.933 06:12:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:49.933 06:12:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:49.933 06:12:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:49.933 06:12:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:49.933 06:12:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:50.191 06:12:23 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:43:50.191 06:12:23 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:43:50.191 06:12:23 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:43:50.191 06:12:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:43:50.191 06:12:23 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:43:50.191 06:12:23 keyring_file -- keyring/file.sh@1 -- # cleanup 00:43:50.191 06:12:23 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.wh5iuUONiF /tmp/tmp.F5y9udOhuk 00:43:50.191 06:12:23 keyring_file -- keyring/file.sh@20 -- # killprocess 3697575 
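The (( 2 == 2 )) and (( 1 == 1 )) checks above come from the get_refcnt helper traced at keyring/common.sh@8-@12. A condensed sketch of that helper chain, assuming the bdevperf instance is still listening on /var/tmp/bperf.sock:

# Sketch of get_refcnt as traced above; rpc.py path and jq filters are taken from the log.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

get_refcnt() {
    local name=$1
    # keyring_get_keys lists every registered key with its current reference count
    "$rpc" -s "$sock" keyring_get_keys \
        | jq ".[] | select(.name == \"$name\")" \
        | jq -r .refcnt
}

get_refcnt key0   # 2: registered once, plus a reference held by nvme0's TLS session
get_refcnt key1   # 1: registered but unused, since the controller attached with --psk key0

Comparing these counts before and after keyring_file_remove_key and bdev_nvme_detach_controller is how the test shows that the TLS session really holds a reference on key0 and drops it on detach, leaving keyring_get_keys empty at file.sh@105.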
00:43:50.191 06:12:23 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3697575 ']' 00:43:50.191 06:12:23 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3697575 00:43:50.191 06:12:23 keyring_file -- common/autotest_common.sh@955 -- # uname 00:43:50.191 06:12:23 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:50.191 06:12:23 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3697575 00:43:50.191 06:12:24 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:50.191 06:12:24 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:50.191 06:12:24 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3697575' 00:43:50.191 killing process with pid 3697575 00:43:50.191 06:12:24 keyring_file -- common/autotest_common.sh@969 -- # kill 3697575 00:43:50.191 Received shutdown signal, test time was about 1.000000 seconds 00:43:50.191 00:43:50.191 Latency(us) 00:43:50.191 [2024-12-16T05:12:24.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:50.191 [2024-12-16T05:12:24.047Z] =================================================================================================================== 00:43:50.191 [2024-12-16T05:12:24.047Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:50.191 06:12:24 keyring_file -- common/autotest_common.sh@974 -- # wait 3697575 00:43:50.450 06:12:24 keyring_file -- keyring/file.sh@21 -- # killprocess 3696086 00:43:50.450 06:12:24 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3696086 ']' 00:43:50.450 06:12:24 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3696086 00:43:50.450 06:12:24 keyring_file -- common/autotest_common.sh@955 -- # uname 00:43:50.450 06:12:24 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:50.450 06:12:24 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3696086 00:43:50.450 06:12:24 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:50.450 06:12:24 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:50.450 06:12:24 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3696086' 00:43:50.450 killing process with pid 3696086 00:43:50.450 06:12:24 keyring_file -- common/autotest_common.sh@969 -- # kill 3696086 00:43:50.450 06:12:24 keyring_file -- common/autotest_common.sh@974 -- # wait 3696086 00:43:51.016 00:43:51.016 real 0m11.576s 00:43:51.016 user 0m28.615s 00:43:51.016 sys 0m2.732s 00:43:51.016 06:12:24 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:51.016 06:12:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:51.016 ************************************ 00:43:51.016 END TEST keyring_file 00:43:51.016 ************************************ 00:43:51.016 06:12:24 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:43:51.016 06:12:24 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:51.016 06:12:24 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:43:51.016 06:12:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:51.016 06:12:24 -- common/autotest_common.sh@10 -- # set +x 00:43:51.016 ************************************ 00:43:51.016 START TEST keyring_linux 00:43:51.016 
************************************ 00:43:51.016 06:12:24 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:51.016 Joined session keyring: 317674675 00:43:51.016 * Looking for test storage... 00:43:51.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:51.016 06:12:24 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:51.016 06:12:24 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:43:51.016 06:12:24 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:51.016 06:12:24 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@345 -- # : 1 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:51.016 06:12:24 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:43:51.017 06:12:24 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:43:51.017 06:12:24 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:51.017 06:12:24 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:51.017 06:12:24 keyring_linux -- scripts/common.sh@368 -- # return 0 00:43:51.017 06:12:24 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:51.017 06:12:24 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:51.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:51.017 --rc genhtml_branch_coverage=1 00:43:51.017 --rc genhtml_function_coverage=1 00:43:51.017 --rc genhtml_legend=1 00:43:51.017 --rc geninfo_all_blocks=1 00:43:51.017 --rc geninfo_unexecuted_blocks=1 00:43:51.017 00:43:51.017 ' 00:43:51.017 06:12:24 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:51.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:51.017 --rc genhtml_branch_coverage=1 00:43:51.017 --rc genhtml_function_coverage=1 00:43:51.017 --rc genhtml_legend=1 00:43:51.017 --rc geninfo_all_blocks=1 00:43:51.017 --rc geninfo_unexecuted_blocks=1 00:43:51.017 00:43:51.017 ' 00:43:51.017 06:12:24 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:51.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:51.017 --rc genhtml_branch_coverage=1 00:43:51.017 --rc genhtml_function_coverage=1 00:43:51.017 --rc genhtml_legend=1 00:43:51.017 --rc geninfo_all_blocks=1 00:43:51.017 --rc geninfo_unexecuted_blocks=1 00:43:51.017 00:43:51.017 ' 00:43:51.017 06:12:24 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:51.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:51.017 --rc genhtml_branch_coverage=1 00:43:51.017 --rc genhtml_function_coverage=1 00:43:51.017 --rc genhtml_legend=1 00:43:51.017 --rc geninfo_all_blocks=1 00:43:51.017 --rc geninfo_unexecuted_blocks=1 00:43:51.017 00:43:51.017 ' 00:43:51.017 06:12:24 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:51.017 06:12:24 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:51.017 06:12:24 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:43:51.017 06:12:24 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:51.017 06:12:24 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:51.017 06:12:24 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:51.017 06:12:24 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:51.017 06:12:24 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:51.017 06:12:24 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:51.017 06:12:24 keyring_linux -- paths/export.sh@5 -- # export PATH 00:43:51.017 06:12:24 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
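Where keyring_file stores the interchange PSK in a 0600 temp file, the linux.sh test that follows keeps it in the kernel session keyring and hands the bdev layer a ":spdk-test:key0"-style name. A condensed sketch of that flow, assembled from the keyctl and rpc.py calls traced below (the key material and serial numbers are just this run's example values):

# Load the interchange PSK into the session keyring (keyring/linux.sh@66 below)
keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

# bdevperf is started with --wait-for-rpc, so enable the keyring_linux module first,
# then attach over TLS using the kernel key name instead of a file-backed key.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/bperf.sock keyring_linux_set_options --enable
"$rpc" -s /var/tmp/bperf.sock framework_start_init
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

# Cleanup mirrors linux.sh@31-@34: resolve the key's serial number, inspect it, unlink it.
sn=$(keyctl search @s user :spdk-test:key0)
keyctl print "$sn"
keyctl unlink "$sn"

The failing attach traced further below repeats the same call with --psk :spdk-test:key1 and is expected to error out; the cleanup then unlinks both serial numbers, which is where the two "1 links removed" lines come from.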
00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:51.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:51.017 06:12:24 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:51.017 06:12:24 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:51.017 06:12:24 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:51.017 06:12:24 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:43:51.017 06:12:24 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:43:51.017 06:12:24 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:43:51.017 06:12:24 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:43:51.017 06:12:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:51.017 06:12:24 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:43:51.017 06:12:24 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:51.017 06:12:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:51.017 06:12:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:43:51.017 06:12:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:43:51.017 06:12:24 keyring_linux -- nvmf/common.sh@729 -- # python - 00:43:51.276 06:12:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:43:51.276 06:12:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:43:51.276 /tmp/:spdk-test:key0 00:43:51.276 06:12:24 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:43:51.276 06:12:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:51.276 06:12:24 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:43:51.276 06:12:24 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:51.276 06:12:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:51.276 06:12:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:43:51.276 
06:12:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:51.276 06:12:24 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:51.276 06:12:24 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:43:51.276 06:12:24 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:43:51.276 06:12:24 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:43:51.276 06:12:24 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:43:51.276 06:12:24 keyring_linux -- nvmf/common.sh@729 -- # python - 00:43:51.276 06:12:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:43:51.276 06:12:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:43:51.276 /tmp/:spdk-test:key1 00:43:51.276 06:12:24 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3698113 00:43:51.276 06:12:24 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3698113 00:43:51.276 06:12:24 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:51.276 06:12:24 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3698113 ']' 00:43:51.276 06:12:24 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:51.276 06:12:24 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:51.276 06:12:24 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:51.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:51.276 06:12:24 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:51.276 06:12:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:51.276 [2024-12-16 06:12:24.996019] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:43:51.276 [2024-12-16 06:12:24.996068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3698113 ] 00:43:51.276 [2024-12-16 06:12:25.049839] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:51.276 [2024-12-16 06:12:25.089673] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:51.533 06:12:25 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:51.533 06:12:25 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:43:51.533 06:12:25 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:43:51.533 06:12:25 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.533 06:12:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:51.533 [2024-12-16 06:12:25.281439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:51.533 null0 00:43:51.533 [2024-12-16 06:12:25.313485] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:51.533 [2024-12-16 06:12:25.313767] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:51.533 06:12:25 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.533 06:12:25 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:43:51.533 300128490 00:43:51.533 06:12:25 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:43:51.533 355597186 00:43:51.533 06:12:25 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3698123 00:43:51.533 06:12:25 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3698123 /var/tmp/bperf.sock 00:43:51.533 06:12:25 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:43:51.533 06:12:25 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3698123 ']' 00:43:51.533 06:12:25 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:51.533 06:12:25 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:51.533 06:12:25 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:51.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:51.533 06:12:25 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:51.533 06:12:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:51.533 [2024-12-16 06:12:25.385863] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:43:51.533 [2024-12-16 06:12:25.385906] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3698123 ] 00:43:51.790 [2024-12-16 06:12:25.440866] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:51.790 [2024-12-16 06:12:25.480056] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:51.790 06:12:25 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:51.790 06:12:25 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:43:51.790 06:12:25 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:43:51.790 06:12:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:43:52.048 06:12:25 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:43:52.048 06:12:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:52.306 06:12:25 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:52.306 06:12:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:52.306 [2024-12-16 06:12:26.110117] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:52.563 nvme0n1 00:43:52.563 06:12:26 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:43:52.563 06:12:26 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:43:52.563 06:12:26 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:52.563 06:12:26 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:52.563 06:12:26 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:52.563 06:12:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:52.564 06:12:26 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:43:52.564 06:12:26 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:52.564 06:12:26 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:43:52.564 06:12:26 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:43:52.564 06:12:26 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:43:52.564 06:12:26 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:52.564 06:12:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:52.821 06:12:26 keyring_linux -- keyring/linux.sh@25 -- # sn=300128490 00:43:52.821 06:12:26 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:43:52.821 06:12:26 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:52.821 06:12:26 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 300128490 == \3\0\0\1\2\8\4\9\0 ]] 00:43:52.821 06:12:26 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 300128490 00:43:52.821 06:12:26 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:43:52.821 06:12:26 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:52.821 Running I/O for 1 seconds... 00:43:54.194 20868.00 IOPS, 81.52 MiB/s 00:43:54.194 Latency(us) 00:43:54.194 [2024-12-16T05:12:28.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:54.194 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:43:54.194 nvme0n1 : 1.01 20866.92 81.51 0.00 0.00 6113.14 3698.10 9299.87 00:43:54.194 [2024-12-16T05:12:28.050Z] =================================================================================================================== 00:43:54.194 [2024-12-16T05:12:28.050Z] Total : 20866.92 81.51 0.00 0.00 6113.14 3698.10 9299.87 00:43:54.194 { 00:43:54.194 "results": [ 00:43:54.194 { 00:43:54.194 "job": "nvme0n1", 00:43:54.194 "core_mask": "0x2", 00:43:54.194 "workload": "randread", 00:43:54.194 "status": "finished", 00:43:54.194 "queue_depth": 128, 00:43:54.194 "io_size": 4096, 00:43:54.194 "runtime": 1.006234, 00:43:54.194 "iops": 20866.915647851296, 00:43:54.194 "mibps": 81.51138924941912, 00:43:54.194 "io_failed": 0, 00:43:54.194 "io_timeout": 0, 00:43:54.194 "avg_latency_us": 6113.144445759825, 00:43:54.194 "min_latency_us": 3698.102857142857, 00:43:54.194 "max_latency_us": 9299.870476190476 00:43:54.194 } 00:43:54.194 ], 00:43:54.194 "core_count": 1 00:43:54.194 } 00:43:54.194 06:12:27 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:54.194 06:12:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:54.194 06:12:27 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:43:54.194 06:12:27 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:43:54.194 06:12:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:54.194 06:12:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:54.194 06:12:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:54.194 06:12:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:54.452 06:12:28 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:43:54.452 06:12:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:54.452 06:12:28 keyring_linux -- keyring/linux.sh@23 -- # return 00:43:54.452 06:12:28 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:54.452 06:12:28 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:43:54.452 06:12:28 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:43:54.452 06:12:28 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:43:54.452 06:12:28 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:54.452 06:12:28 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:43:54.452 06:12:28 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:54.452 06:12:28 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:54.452 06:12:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:54.452 [2024-12-16 06:12:28.261991] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:54.452 [2024-12-16 06:12:28.262814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x156cae0 (107): Transport endpoint is not connected 00:43:54.452 [2024-12-16 06:12:28.263809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x156cae0 (9): Bad file descriptor 00:43:54.452 [2024-12-16 06:12:28.264810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:54.452 [2024-12-16 06:12:28.264827] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:54.452 [2024-12-16 06:12:28.264835] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:54.452 [2024-12-16 06:12:28.264843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:43:54.452 request: 00:43:54.452 { 00:43:54.452 "name": "nvme0", 00:43:54.452 "trtype": "tcp", 00:43:54.452 "traddr": "127.0.0.1", 00:43:54.452 "adrfam": "ipv4", 00:43:54.452 "trsvcid": "4420", 00:43:54.452 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:54.452 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:54.452 "prchk_reftag": false, 00:43:54.452 "prchk_guard": false, 00:43:54.452 "hdgst": false, 00:43:54.452 "ddgst": false, 00:43:54.452 "psk": ":spdk-test:key1", 00:43:54.452 "allow_unrecognized_csi": false, 00:43:54.452 "method": "bdev_nvme_attach_controller", 00:43:54.452 "req_id": 1 00:43:54.452 } 00:43:54.452 Got JSON-RPC error response 00:43:54.452 response: 00:43:54.452 { 00:43:54.452 "code": -5, 00:43:54.452 "message": "Input/output error" 00:43:54.452 } 00:43:54.452 06:12:28 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:43:54.452 06:12:28 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:54.452 06:12:28 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:54.453 06:12:28 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:54.453 06:12:28 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:43:54.453 06:12:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:54.453 06:12:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:43:54.453 06:12:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:43:54.453 06:12:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:43:54.453 06:12:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:54.453 06:12:28 keyring_linux -- keyring/linux.sh@33 -- # sn=300128490 00:43:54.453 06:12:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 300128490 00:43:54.453 1 links removed 00:43:54.453 06:12:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:54.453 06:12:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:43:54.453 06:12:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:43:54.453 06:12:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:43:54.453 06:12:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:43:54.453 06:12:28 keyring_linux -- keyring/linux.sh@33 -- # sn=355597186 00:43:54.453 06:12:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 355597186 00:43:54.453 1 links removed 00:43:54.453 06:12:28 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3698123 00:43:54.453 06:12:28 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3698123 ']' 00:43:54.453 06:12:28 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3698123 00:43:54.453 06:12:28 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:43:54.453 06:12:28 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:54.453 06:12:28 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3698123 00:43:54.711 06:12:28 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:54.711 06:12:28 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:54.711 06:12:28 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3698123' 00:43:54.711 killing process with pid 3698123 00:43:54.711 06:12:28 keyring_linux -- common/autotest_common.sh@969 -- # kill 3698123 00:43:54.711 Received shutdown signal, test time was about 1.000000 seconds 00:43:54.711 00:43:54.711 
Latency(us) 00:43:54.711 [2024-12-16T05:12:28.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:54.711 [2024-12-16T05:12:28.567Z] =================================================================================================================== 00:43:54.711 [2024-12-16T05:12:28.567Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:54.711 06:12:28 keyring_linux -- common/autotest_common.sh@974 -- # wait 3698123 00:43:54.711 06:12:28 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3698113 00:43:54.711 06:12:28 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3698113 ']' 00:43:54.711 06:12:28 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3698113 00:43:54.711 06:12:28 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:43:54.711 06:12:28 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:54.711 06:12:28 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3698113 00:43:54.969 06:12:28 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:54.969 06:12:28 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:54.969 06:12:28 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3698113' 00:43:54.969 killing process with pid 3698113 00:43:54.969 06:12:28 keyring_linux -- common/autotest_common.sh@969 -- # kill 3698113 00:43:54.969 06:12:28 keyring_linux -- common/autotest_common.sh@974 -- # wait 3698113 00:43:55.228 00:43:55.228 real 0m4.234s 00:43:55.228 user 0m7.866s 00:43:55.228 sys 0m1.430s 00:43:55.228 06:12:28 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:55.228 06:12:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:55.228 ************************************ 00:43:55.228 END TEST keyring_linux 00:43:55.228 ************************************ 00:43:55.228 06:12:28 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:43:55.228 06:12:28 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:43:55.228 06:12:28 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:43:55.228 06:12:28 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:43:55.228 06:12:28 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:43:55.228 06:12:28 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:43:55.228 06:12:28 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:43:55.228 06:12:28 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:43:55.228 06:12:28 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:43:55.228 06:12:28 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:43:55.228 06:12:28 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:43:55.228 06:12:28 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:43:55.228 06:12:28 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:43:55.228 06:12:28 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:43:55.228 06:12:28 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:43:55.228 06:12:28 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:43:55.228 06:12:28 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:43:55.228 06:12:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:55.228 06:12:28 -- common/autotest_common.sh@10 -- # set +x 00:43:55.228 06:12:28 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:43:55.228 06:12:28 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:43:55.228 06:12:28 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:43:55.228 06:12:28 -- common/autotest_common.sh@10 -- # set +x 00:44:00.585 INFO: APP EXITING 
00:44:00.585 INFO: killing all VMs 00:44:00.585 INFO: killing vhost app 00:44:00.585 INFO: EXIT DONE 00:44:03.127 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:44:03.127 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:44:03.127 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:44:03.127 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:44:03.127 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:44:03.127 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:44:03.127 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:44:03.127 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:44:03.127 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:44:03.127 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:44:03.127 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:44:03.127 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:44:03.127 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:44:03.127 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:44:03.127 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:44:03.127 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:44:03.127 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:44:06.410 Cleaning 00:44:06.410 Removing: /var/run/dpdk/spdk0/config 00:44:06.410 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:44:06.410 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:44:06.410 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:44:06.410 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:44:06.410 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:44:06.410 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:44:06.410 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:44:06.410 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:44:06.410 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:44:06.410 Removing: /var/run/dpdk/spdk0/hugepage_info 00:44:06.410 Removing: /var/run/dpdk/spdk1/config 00:44:06.410 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:44:06.410 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:44:06.410 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:44:06.410 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:44:06.410 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:44:06.410 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:44:06.410 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:44:06.410 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:44:06.410 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:44:06.410 Removing: /var/run/dpdk/spdk1/hugepage_info 00:44:06.410 Removing: /var/run/dpdk/spdk2/config 00:44:06.410 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:44:06.410 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:44:06.410 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:44:06.410 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:44:06.410 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:44:06.410 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:44:06.410 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:44:06.410 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:44:06.410 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:44:06.410 Removing: /var/run/dpdk/spdk2/hugepage_info 00:44:06.410 Removing: /var/run/dpdk/spdk3/config 00:44:06.410 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:44:06.410 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:44:06.410 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:44:06.410 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:44:06.410 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:44:06.410 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:44:06.410 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:44:06.410 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:44:06.410 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:44:06.410 Removing: /var/run/dpdk/spdk3/hugepage_info 00:44:06.410 Removing: /var/run/dpdk/spdk4/config 00:44:06.410 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:44:06.410 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:44:06.410 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:44:06.410 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:44:06.410 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:44:06.410 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:44:06.410 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:44:06.410 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:44:06.410 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:44:06.410 Removing: /var/run/dpdk/spdk4/hugepage_info 00:44:06.410 Removing: /dev/shm/bdev_svc_trace.1 00:44:06.410 Removing: /dev/shm/nvmf_trace.0 00:44:06.410 Removing: /dev/shm/spdk_tgt_trace.pid3148478 00:44:06.410 Removing: /var/run/dpdk/spdk0 00:44:06.410 Removing: /var/run/dpdk/spdk1 00:44:06.410 Removing: /var/run/dpdk/spdk2 00:44:06.410 Removing: /var/run/dpdk/spdk3 00:44:06.410 Removing: /var/run/dpdk/spdk4 00:44:06.410 Removing: /var/run/dpdk/spdk_pid3146415 00:44:06.410 Removing: /var/run/dpdk/spdk_pid3147425 00:44:06.410 Removing: /var/run/dpdk/spdk_pid3148478 00:44:06.410 Removing: /var/run/dpdk/spdk_pid3149099 00:44:06.410 Removing: /var/run/dpdk/spdk_pid3150021 00:44:06.410 Removing: /var/run/dpdk/spdk_pid3150095 00:44:06.410 Removing: /var/run/dpdk/spdk_pid3151155 00:44:06.410 Removing: /var/run/dpdk/spdk_pid3151210 00:44:06.410 Removing: /var/run/dpdk/spdk_pid3151558 00:44:06.410 Removing: /var/run/dpdk/spdk_pid3153039 00:44:06.410 Removing: /var/run/dpdk/spdk_pid3154285 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3154701 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3154870 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3155156 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3155441 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3155686 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3155930 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3156216 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3156939 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3159863 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3160113 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3160363 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3160372 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3160848 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3160853 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3161339 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3161344 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3161718 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3161818 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3162060 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3162075 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3162631 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3162820 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3163159 00:44:06.411 Removing: 
/var/run/dpdk/spdk_pid3166718 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3170982 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3180786 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3181479 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3186176 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3186429 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3190750 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3196428 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3199118 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3209124 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3217885 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3219669 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3220580 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3237624 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3241625 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3323714 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3329000 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3334639 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3340525 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3340527 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3341412 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3342153 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3342983 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3343648 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3343654 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3343883 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3343961 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3344103 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3344874 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3345680 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3346569 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3347133 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3347233 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3347463 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3348458 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3349429 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3358047 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3385728 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3390267 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3392215 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3394009 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3394061 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3394249 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3394468 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3394880 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3396544 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3397420 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3397771 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3400027 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3400413 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3400997 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3404983 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3410263 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3410265 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3410267 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3414130 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3417873 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3422566 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3457771 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3461814 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3467804 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3469467 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3470771 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3472065 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3476728 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3480675 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3487940 00:44:06.411 Removing: 
/var/run/dpdk/spdk_pid3487948 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3492424 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3492577 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3492790 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3493235 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3493240 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3494602 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3496373 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3497929 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3499479 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3501121 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3502807 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3508535 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3509093 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3510924 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3512325 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3517925 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3520605 00:44:06.411 Removing: /var/run/dpdk/spdk_pid3525722 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3531120 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3539486 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3546554 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3546563 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3565222 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3565761 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3566230 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3566903 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3567507 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3568085 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3568544 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3569150 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3573169 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3573390 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3579324 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3579385 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3584596 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3588670 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3598179 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3598845 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3603322 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3603616 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3607724 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3613234 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3615719 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3625360 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3633890 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3635565 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3636460 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3652569 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3656314 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3658944 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3666492 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3666497 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3671451 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3673361 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3675274 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3676296 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3678214 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3679441 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3687808 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3688255 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3688716 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3690924 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3691378 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3691881 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3696086 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3696099 00:44:06.669 Removing: 
/var/run/dpdk/spdk_pid3697575 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3698113 00:44:06.669 Removing: /var/run/dpdk/spdk_pid3698123 00:44:06.669 Clean 00:44:06.927 06:12:40 -- common/autotest_common.sh@1451 -- # return 0 00:44:06.927 06:12:40 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:44:06.927 06:12:40 -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:06.927 06:12:40 -- common/autotest_common.sh@10 -- # set +x 00:44:06.927 06:12:40 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:44:06.927 06:12:40 -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:06.927 06:12:40 -- common/autotest_common.sh@10 -- # set +x 00:44:06.927 06:12:40 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:06.927 06:12:40 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:44:06.927 06:12:40 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:44:06.927 06:12:40 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:44:06.927 06:12:40 -- spdk/autotest.sh@394 -- # hostname 00:44:06.927 06:12:40 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:44:06.927 geninfo: WARNING: invalid characters removed from testname! 00:44:28.851 06:13:00 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:29.786 06:13:03 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:31.687 06:13:05 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:33.588 06:13:07 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:35.492 06:13:09 -- spdk/autotest.sh@402 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:37.396 06:13:10 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:39.299 06:13:12 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:44:39.299 06:13:12 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:44:39.299 06:13:12 -- common/autotest_common.sh@1681 -- $ lcov --version 00:44:39.299 06:13:12 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:44:39.299 06:13:12 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:44:39.299 06:13:12 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:44:39.299 06:13:12 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:44:39.299 06:13:12 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:44:39.299 06:13:12 -- scripts/common.sh@336 -- $ IFS=.-: 00:44:39.299 06:13:12 -- scripts/common.sh@336 -- $ read -ra ver1 00:44:39.299 06:13:12 -- scripts/common.sh@337 -- $ IFS=.-: 00:44:39.299 06:13:12 -- scripts/common.sh@337 -- $ read -ra ver2 00:44:39.299 06:13:12 -- scripts/common.sh@338 -- $ local 'op=<' 00:44:39.299 06:13:12 -- scripts/common.sh@340 -- $ ver1_l=2 00:44:39.299 06:13:12 -- scripts/common.sh@341 -- $ ver2_l=1 00:44:39.299 06:13:12 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:44:39.299 06:13:12 -- scripts/common.sh@344 -- $ case "$op" in 00:44:39.299 06:13:12 -- scripts/common.sh@345 -- $ : 1 00:44:39.299 06:13:12 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:44:39.299 06:13:12 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:39.299 06:13:12 -- scripts/common.sh@365 -- $ decimal 1 00:44:39.299 06:13:12 -- scripts/common.sh@353 -- $ local d=1 00:44:39.299 06:13:12 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:44:39.299 06:13:12 -- scripts/common.sh@355 -- $ echo 1 00:44:39.299 06:13:12 -- scripts/common.sh@365 -- $ ver1[v]=1 00:44:39.299 06:13:12 -- scripts/common.sh@366 -- $ decimal 2 00:44:39.299 06:13:12 -- scripts/common.sh@353 -- $ local d=2 00:44:39.299 06:13:12 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:44:39.299 06:13:12 -- scripts/common.sh@355 -- $ echo 2 00:44:39.299 06:13:12 -- scripts/common.sh@366 -- $ ver2[v]=2 00:44:39.299 06:13:12 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:44:39.299 06:13:12 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:44:39.299 06:13:12 -- scripts/common.sh@368 -- $ return 0 00:44:39.299 06:13:12 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:39.299 06:13:12 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:44:39.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:39.299 --rc genhtml_branch_coverage=1 00:44:39.299 --rc genhtml_function_coverage=1 00:44:39.299 --rc genhtml_legend=1 00:44:39.299 --rc geninfo_all_blocks=1 00:44:39.299 --rc geninfo_unexecuted_blocks=1 00:44:39.299 00:44:39.299 ' 00:44:39.299 06:13:12 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:44:39.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:39.299 --rc genhtml_branch_coverage=1 00:44:39.299 --rc genhtml_function_coverage=1 00:44:39.299 --rc genhtml_legend=1 00:44:39.299 --rc geninfo_all_blocks=1 00:44:39.299 --rc geninfo_unexecuted_blocks=1 00:44:39.299 00:44:39.299 ' 00:44:39.299 06:13:12 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:44:39.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:39.299 --rc genhtml_branch_coverage=1 00:44:39.299 --rc genhtml_function_coverage=1 00:44:39.299 --rc genhtml_legend=1 00:44:39.299 --rc geninfo_all_blocks=1 00:44:39.299 --rc geninfo_unexecuted_blocks=1 00:44:39.299 00:44:39.299 ' 00:44:39.299 06:13:12 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:44:39.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:39.299 --rc genhtml_branch_coverage=1 00:44:39.299 --rc genhtml_function_coverage=1 00:44:39.299 --rc genhtml_legend=1 00:44:39.299 --rc geninfo_all_blocks=1 00:44:39.299 --rc geninfo_unexecuted_blocks=1 00:44:39.299 00:44:39.299 ' 00:44:39.299 06:13:12 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:39.299 06:13:12 -- scripts/common.sh@15 -- $ shopt -s extglob 00:44:39.299 06:13:12 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:44:39.299 06:13:12 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:39.299 06:13:12 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:39.299 06:13:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:39.299 06:13:12 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:39.299 06:13:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:39.299 06:13:12 -- paths/export.sh@5 -- $ export PATH 00:44:39.299 06:13:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:39.299 06:13:12 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:44:39.299 06:13:12 -- common/autobuild_common.sh@479 -- $ date +%s 00:44:39.299 06:13:12 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1734325992.XXXXXX 00:44:39.299 06:13:12 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1734325992.BXeDWZ 00:44:39.299 06:13:12 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:44:39.299 06:13:12 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:44:39.299 06:13:12 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:44:39.299 06:13:12 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:44:39.299 06:13:12 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:44:39.299 06:13:12 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:44:39.299 06:13:12 -- common/autobuild_common.sh@495 -- $ get_config_params 00:44:39.299 06:13:12 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:44:39.299 06:13:12 -- common/autotest_common.sh@10 -- $ set +x 00:44:39.299 06:13:12 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:44:39.299 06:13:12 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:44:39.299 06:13:12 -- pm/common@17 -- $ local monitor 00:44:39.299 06:13:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:39.299 06:13:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:39.299 06:13:12 -- pm/common@21 -- $ date +%s 00:44:39.299 06:13:12 -- pm/common@19 -- $ for 
monitor in "${MONITOR_RESOURCES[@]}" 00:44:39.299 06:13:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:39.299 06:13:12 -- pm/common@21 -- $ date +%s 00:44:39.299 06:13:12 -- pm/common@25 -- $ sleep 1 00:44:39.299 06:13:12 -- pm/common@21 -- $ date +%s 00:44:39.299 06:13:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1734325992 00:44:39.299 06:13:12 -- pm/common@21 -- $ date +%s 00:44:39.299 06:13:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1734325992 00:44:39.299 06:13:12 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1734325992 00:44:39.299 06:13:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1734325992 00:44:39.299 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1734325992_collect-cpu-load.pm.log 00:44:39.299 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1734325992_collect-vmstat.pm.log 00:44:39.299 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1734325992_collect-cpu-temp.pm.log 00:44:39.299 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1734325992_collect-bmc-pm.bmc.pm.log 00:44:40.237 06:13:13 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:44:40.237 06:13:13 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:44:40.237 06:13:13 -- spdk/autopackage.sh@14 -- $ timing_finish 00:44:40.237 06:13:13 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:44:40.237 06:13:13 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:44:40.237 06:13:13 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:40.237 06:13:13 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:44:40.237 06:13:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:44:40.237 06:13:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:44:40.237 06:13:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:40.237 06:13:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:44:40.237 06:13:13 -- pm/common@44 -- $ pid=3709823 00:44:40.237 06:13:13 -- pm/common@50 -- $ kill -TERM 3709823 00:44:40.237 06:13:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:40.237 06:13:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:44:40.237 06:13:13 -- pm/common@44 -- $ pid=3709825 00:44:40.237 06:13:13 -- pm/common@50 -- $ kill -TERM 3709825 00:44:40.237 06:13:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:40.237 
06:13:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:44:40.237 06:13:13 -- pm/common@44 -- $ pid=3709828 00:44:40.237 06:13:13 -- pm/common@50 -- $ kill -TERM 3709828 00:44:40.237 06:13:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:40.237 06:13:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:44:40.237 06:13:13 -- pm/common@44 -- $ pid=3709846 00:44:40.237 06:13:13 -- pm/common@50 -- $ sudo -E kill -TERM 3709846 00:44:40.237 + [[ -n 3053940 ]] 00:44:40.237 + sudo kill 3053940 00:44:40.247 [Pipeline] } 00:44:40.262 [Pipeline] // stage 00:44:40.267 [Pipeline] } 00:44:40.281 [Pipeline] // timeout 00:44:40.287 [Pipeline] } 00:44:40.301 [Pipeline] // catchError 00:44:40.306 [Pipeline] } 00:44:40.320 [Pipeline] // wrap 00:44:40.326 [Pipeline] } 00:44:40.339 [Pipeline] // catchError 00:44:40.348 [Pipeline] stage 00:44:40.350 [Pipeline] { (Epilogue) 00:44:40.363 [Pipeline] catchError 00:44:40.365 [Pipeline] { 00:44:40.377 [Pipeline] echo 00:44:40.379 Cleanup processes 00:44:40.385 [Pipeline] sh 00:44:40.671 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:40.671 3709969 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:44:40.671 3710316 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:40.684 [Pipeline] sh 00:44:40.968 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:40.968 ++ grep -v 'sudo pgrep' 00:44:40.968 ++ awk '{print $1}' 00:44:40.968 + sudo kill -9 3709969 00:44:40.979 [Pipeline] sh 00:44:41.260 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:44:53.482 [Pipeline] sh 00:44:53.766 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:44:53.766 Artifacts sizes are good 00:44:53.780 [Pipeline] archiveArtifacts 00:44:53.787 Archiving artifacts 00:44:53.975 [Pipeline] sh 00:44:54.310 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:44:54.346 [Pipeline] cleanWs 00:44:54.352 [WS-CLEANUP] Deleting project workspace... 00:44:54.352 [WS-CLEANUP] Deferred wipeout is used... 00:44:54.357 [WS-CLEANUP] done 00:44:54.359 [Pipeline] } 00:44:54.373 [Pipeline] // catchError 00:44:54.381 [Pipeline] sh 00:44:54.656 + logger -p user.info -t JENKINS-CI 00:44:54.665 [Pipeline] } 00:44:54.678 [Pipeline] // stage 00:44:54.683 [Pipeline] } 00:44:54.697 [Pipeline] // node 00:44:54.702 [Pipeline] End of Pipeline 00:44:54.771 Finished: SUCCESS